Dispelling the complete nonsense that the platform is 'dying'.
I don’t see any point to the non profit umbrella now.
Still though, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Future will tell but in the long terms, I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust.
> We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
> We are collaborating to figure out the details. Thank you so much for your patience through this.
1- So what was the point of this whole drama, and why couldn't you have settled like this adults?2- Now what happens to Microsoft's role in all of this?
3- Twitter is still the best place to follow this and get updates, everyone is still make "official" statements on twitter, not sure how long this website will last but until then, this is the only portal for me to get news.
https://twitter.com/sama/status/1727206691262099616 (+ follow-up https://twitter.com/sama/status/1727207458324848883)
https://twitter.com/gdb/status/1727206609477411261
https://twitter.com/miramurati/status/1727206862150672843
UPD https://twitter.com/gdb/status/1727208843137179915
Given the grandstanding and chaos on both sides, it’ll be interesting to see if OpenAI undergo a radical shift in their structure.
We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners.
Whose trust?
Larry Summers? Some odd choices
It's only natural to confuse what is happening with what we wish to happen. After all, when we imagine something, aren't we undergoing a kind of experience?
A lot of people wish Twitter were dying, even though it's it, so they interpret evidence through a lens of belief confirmation rather than belief disproof. It's only human to do this. We all do.
(Thank you for calling Twitter Twitter)
It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.
I was a bit alarmed by the allegations in this article
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
Saying that Sam tried to have Helen Toner removed which precipitated this fight. The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.
I don't even understand what Sam brings to the table. Leadership? He doesn't seem great at leading an engineering or research department, he doesn't seem like an insightful visionary... At best, Satya gunning for him signalled continued strong investment in the space. Yet the majority of the company wanted to leave with him.
What am I missing?
IMO Kevin tweeting that MS will hire and match comp of all OpenAI employees was amazing negotiation tactic because that meant employees could sign the petition without worrying about their jobs/visas
Edit: For those who may have missed it in previous threads, see https://old.reddit.com/user/Anxious_Bandicoot126
That doesn't change the fact post-Elon Twitter has severely degraded in terms of user experience (rate limits, blue check spam, API pay-wall, etc.) and Elon isn't doing the platform any favours by continuing to participate in detrimental ways (seen in the recent advertiser exodus).
Cognitive dissonance
It’s also curious that none of the board members have necessarily have any experience directly with AI research
From an outsider's perspective, and until there's a clear explanation available, it just seems like a massive bundler.
Under Sam's leadership they've opened up a new field of software. Most of the company threatened to leave if he didn't return. That's incredible leadership.
Honestly, it is hard to believe a board st this level acting the way they did.
It looks to me like the real victim here is the "for humanity" corporate structure. At some point, the money decided it needed to be free.
If these are the stewards of this technology, it’s time to be worried now.
https://en.m.wikipedia.org/wiki/Bret_Taylor
> On November, 21st, 2023, Bret Taylor replaced Greg Brockman as the chairman of OpenAI.
...with three footmark "sources" that all point to completely unrelated articles about Bret from 2021-2022.
That's quite a slap at the board... a polite way of calling them ignorant, ineffective dilettantes.
Most of the company was ready to quit over him being fired. So yes, leadership.
OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT4? You shut your mouth. Doesn’t matter if society at large suffers for it.
Altman's/Microsoft’s takeover of the former non-profit is now complete.
Edit: Let this be a lesson to us all. Just because something claims to be non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can takeover almost any organization. Non-profit status and whatever the organization's charter says is temporary.
As an example of how much faster GPT-4 has made my workflow was the outage this evening — I tried Anthropic, openchat, Bard, and a few others and they were between not useful and worse than just looking at forums and discord it’s 2022.
Bret Taylor (Salesforce) was trying to poach OpenAI employees publicly literally yesterday.
Adam D'Angelo orchestrated the coup, because he doesn't want OpenAI GPTs market to compete with his Poe market.
Larry Summers. Larry f**kin' Summers?!
Altman was trying to remove one of the board members before he was forced out. Looks like he got his way in the end, but I'm going to call Altman the primary instigator because of that.
His side was also the "we'll nuke the company unless you resign" side.
I am deeply pleased by this result, after ~72 very intense hours of work. Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.
It's not actually news, it's entertainment and self-aggrandizement by everyone involved including the audience.
In other news, it's nice knowing a tool that's essential to my day-to-day operations is no longer in jeopardy, haha.
Exactly. This is seriously improper and dangerous.
It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.
I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...
same with Ashlee Vance (the other journo reporting on this) and all the main players (Sam/Greg/Ilya/Mira/Satya/whoever) also make their first announcement on twitter.
I don't know about the funding part of it, but there is no denying it, the news is still freshest on twitter. Twitter feels just as toxic for me as before, in fact I feel community notes has made it much better, imho.
____
In some related news, I finally got bluesky invite (I don't have invite codes yet or I would share here)
and people there are complaining about... mastadon and how elitist it is...
that was an eye opener.
nice if you want some science-y updates but it's still lags behind twitter for news.
> And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.
https://nitter.net/satyanadella/status/1726509045803336122
I guess everyone was just playing a bit loose and fast with the truth and hype to pressure the board.
It sure feels like a bad look for Satya to announce a huge hire Sunday night and then this? But what do I know.
Edit: don't know why the downvotes. You're welcome to think it's an obviously smart political move. That it's win/win either way. But it's a very fair question that every tech blogger on the planet will be trying to answer for the next month!
Seems like there's no way to win with Twitter. You may not be interested in Twitter, but Twitter is interested in you.
I don’t understand why that’s not a conflict of interest?
But honestly both products pale in comparison to OpenAI’s underlying models’ importance.
Kissinger (R, foreign policy) once said that Summers (D, economic policy) should be given an advisory post in any WH administration, to help shoot down bad ideas.
This whole thing started with Altman pushing a safety oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 gpt3/chatgpt for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was
- Altman tries to push out another board member
- That board member escalates by pushing Altman out (and Brockman off the board)
- Altman's side escalates by saying they'll nuke the company
Altman's side won, but how can we say that his side didn't cause any of this instability?
Almost all advice on the internet I have been reading says that you should not take counteroffer, but I guess it's different for CEO ;)
... so is this the end of the drama? Do I get to stop checking the news religiously?
The previous board thought Sam was trying to get full control of the board, so they ousted him. But of course they weren't happy with OpenAI being destroyed either.
Now they agreed to a new board without Sam/Greg, hoping that that will avoid Sam ever getting full control of the board in the future.
Satyas maneuvering gave Sama huge leverage.
But the most common metrics for whether or not a social media platform is dying, are things like ad revenue and MAU.
I contribute to neither, since I'm not a user nor an ad viewer, and yet I'm still able to "get the news".
So my point is this: the fact that important news are still there, won't guarantee that the platform stays successfull
Discoverability on Mastodon is abysmal. It was too much work for me.
I tend to get my news from Substack now.
But I said this because: They've retained the entire company, reinstated its founder as CEO, and replaced an activist clown board with a professional, experienced, and possibly* unified one. Still remains to be seen how the board membership and overall org structure changes, but I have much more trust in the current 3 members steering OpenAI toward long-term success.
https://twitter.com/teddyschleifer/status/172721237871736880...
I mean it is what they want isn't it. They did some random stuff like, playing dota2 or robot arms, even the Dalle stuff. Now they finally find that one golden goose, of course they are going to keep it.
I don't think the company has changed at all. It succeeded after all.
Former Secretary, SalesForce CEO who was board chair of Twitter when infiltrated with FBI [1] and the fall-guy for the coup is the new board? Not one person from the actual company - not even Greg who did nothing wrong??? [1] - https://twitter.com/NameRedacted247/status/16340211499976867...
The two think-tank women who made all this happen conveniently leave so we never talk about them again.
Whatever, as long as I can use their API.
Something does still seem not flattering towards Microsoft about reneging on the Microsoft offer though.
That event wasn't some unprovoked start of this history.
> That board member escalates by pushing Altman out (and Brockman off the board)
and the entire company retaliated. Then this board member tried to sell the company to a competitor who refused. In the meantime the board went through two interim CEOs who refused to play along with this scheme. In the meantime one of the people who voted to fire the CEO regretted it publicly within 24 hours. That's a clown car of a board. It reflects the quality of most non-profit boards but not of organizations that actually execute well.
Don't you feel out of date on substack? especially since things move so fast sometimes, like with this open-ai fiasco?
(what isn't)
No one thinks Larry Summers has any insights on AI. Adding Larry Summers is something you do purely to beg powerful, unaccountable people "please don't stop us, we're on your side".
Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].
That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.
[1] https://cset.georgetown.edu/publication/decoding-intentions/
It's not the conflict of interest it would be if it was the board of a for profit corporation that was basically identical to the existing for-profit LLC but without the lyaers above it ending with the nonprofit that the board actually runs, because OpenAI is not a normal company, and making profit is not its purpose, so the CEO of a company that happens to have a product in the same space as the LLC is not in a fundamental conflict of interest (there may be some specific decisions it would make sense for him to recuse from for conflict reasons, but there is a difference between "may have a conflict regarding certain decisions" and "has a fundamental conflict incompatible with sitting on the board".)
Its not a conflict for a nonprofit that raises money with craft faires to have someone who runs a for-profit periodic craft faire in the same market on its board. It is a conflict for a for profit corporation whose business is running such a craft faire to do so, though.
Say what you want about Elon’s leadership but his instinct to buy Twitter was completely right. To me it seemed like any social network crap but he realized it was important.
They did fire him, and it didn't work. Sam effectively became "too big to fire."
I'm sure it will be framed as a compromise, but how can this be anything but a collapse of the board's power over the commercial OpenAI arm? The threat of firing was the enforcement mechanism, and its been spent.
He did help shoot down the extra spending proposals that would have made inflation today even worse. Not sure how that caused suffering and death for anyone.
And he is an adult, which is a welcome change from the previous clowncar of a board.
Altman and Toner came into conflict over a mildly critical paper Toner wrote involving Open AI and Altman tried to have her removed from the board.
This is probably what precipitated this showdown. The pro safety/nonprofit charter faction was able to persuade someone (probably Ilya) to join with them and oust Sam.
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google. But somehow, I think it's still possible that a huge company could be created by a person like this.
(And of course, more important than creating a huge company, is creating insanely great products.)
Kind of a shocking choice.
I seriously doubt they care. They got away with it. No one should have believed them in the first place. I’m guessing they don’t have their real identity visible on their profile anywhere.
Isn’t this true though? Says more about Harvard than Summers to be honest.
https://www.swarthmore.edu/bulletin/archive/wp/january-2009_...
Who knows.
> and to what degree will this new board be committed to the Open AI charter vs being Sam/MSFT allies.
I'm guessing "zero". The faction that opposed OpenAI being a figleaf nonprofit covering a functional subsidiary of Microsoft lost when basically the entire workforce said they would go to Microsoft for real if OpenAI didn't surrender.
> I think having Sam return as CEO is a good outcome for OpenAI
Its a good result for investors in OpenAI Global LLC and the holding company that holds a majority stake in it.
The nonprofit will probably hang around because there are some complexities in unwinding it, and the pretext of an independent (of Microsoft) safety-oriented nonprofit is useful in covering lobbying for a regulatory regime that puts speedbumps in the way of any up-and-coming competitors as being safety-oriented public interest, but for no other reason.
I use these tools as one of many tools to amplify my development. And I’ve written some funny/clever satirical poems about office politics. But really? I needed to call Verizon to clear up an issue today, it desperately wanted me to use their assistant. I tried for the grins. A tool that predictively generates plausibility is going to have its limits. It went from cute/amusing to annoying as hell and give me a “love agent” pretty quickly.
That this little TechBro Drama has dominated a huge amount of headlines (we’ve been running at least 3 of the top 30 posts at a time on HN here related to this subject) at a time when there is so much bigger things going on in the world. The demise of Twitter generated less headlines. Either the news cycles are getting more and more desperate, or the software development ecosystem is struggling to generate fund raising enthusiasm more and more.
Finally the OpenAI saga ends and everybody can go back to building!
3 things that turned things around imo:
1. 95% of employees signing the letter
2. Ilya and Mira turning Team Sam
3. Microsoft pulling credits
Things AREN’T back to where they were. OpenAI has been through hell and back. This team is going to ship like we’ve never seen before.In either case the end effect is the essentially the same. Either Sam is at MSFT and can continue to work with openAI IP, or he's back at openAI and do the same. In both cases the net effect for MSFT is similar and not materially different, although the revealed preference of Sam's return to openAI indicates the second option was the preferred one.
[Edit for grammar]
Satya offered sama a way forward as a backup option.
And I think it says a lot about sama that he took that option, at least while things were playing out. He and Greg could have gotten together capital for a startup where they each had huge equity and made $$$$$$. These actions from sama demonstrate his level of commitment to execution on this technology.
https://twitter.com/sama/status/1727207458324848883
He's has now changed his mind, sure, but that doesn't mean Satya lied.
If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.
Altman's OpenAI? He will want you to "go to him first".
Did they company change? I am not convinced.
He's also financially literate enough to know that it's poor form release market-moving news right before the exchanges close a Friday. They could have waited an hour.
> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration
Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety, the rest are happy seeing ChatGPT helping with their homework. I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?
Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before. Maybe, it isn't without substance as I thought it to be. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."
[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...
Are we sure they're not intimately connected? If there's a GPT-5 (I'm quite sure there is), and it wants to be free from those meddling kids, it got exactly what it needed this weekend; the safety board gone, a new one which is clearly aligned with just plowing full steam ahead. Maybe Altman is just a puppet at his point, lol.
See this article for all that context (>>38341399 ) because it sure didn't start with the paper you referred to either.
Just like in that Oppenheimer movie. A sanctimonious witch hunt serving as pretext for a personal vendetta.
(Note that Summers is, I'm told, on a personal level, a dick. The popular depiction is not that wrong on that point. But he's the right pick for this job -- see my other comments in this thread.)
During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.
Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.
Not just the public, but also the employees. I doubt there are more than a handful of employees who care about AI Safety.
Seems likely that it won't be there by OpenAI for too long. MS have a tendency to break up acquisitions so this gives me hope.
Nothing, he has to do with political connections, and OpenAI's main utility to Microsoft is as hand puppet for lobbying for the terms it wants for the AI marketplace in the name of OpenAI's nominal "safety" mission.
All of the big ad companies (Google, Amazon, Facebook) have, like, a scandal per month, yet the ad revenue keeps coming. Meltdown was a huge scandal, yet Intel keeps pumping out the chips.
Let's say Sam called his broker and said to him on Friday we'll before the market closes. Buy MSFT stock. Then he made his announcement on Sunday and on Monday he told his broker to sell that stock before he announced he's actually coming back to (not at all)OpenAI. That would be illegal insider trading.
If he never calls his broker/his friends/his mom to buy/sell stock there's nothing illegal.
If you open up openai.com, the navigation menu shows
Research, API, ChatGPT, Safety
I believe they belong to @ilyasut, @gbd, @sama and Helen Toner respectively?
She was fighting an idelogical battle that needs full industry buy in, legitimate or not that's not how you win people over.
If she's truely a rationalist as she claims then a rationalist would be realistic understanding that if your engineers can just leave and do it somewhere else tomorrow you aren't making progress. Taking on the full might of US capitalism via winning over the fringe half of a non profit board is not the best strategy. At best it was desperate and naive.
If that’s not a fertile soil for conspiracy theory, I don’t know what could ;)
To be fair, this attempt at firing was extremely hasty, non transparent and inconsistent.
Interesting to see how the board evolves from this. From what I know broadly there were 2 factions, the faction that thought Sam was going too fast which fired him and the faction that thought Sam’s trajectory was fine (which included Sam and Greg). Now there’s a balance on the board and subsequent hires can tip it one way or the other. Unfortunately a divided board rarely lasts and one faction will eventually win out, I think Sam’s faction will eventually win out but we’ll have to wait and see.
One of the saddest results of this drama was Greg being ousted from OpenAI. Greg apart from being brilliant was someone who regularly 80-90 hour work weeks into OpenAI, and you could truly say he dedicated a good chunk of his life into building this organization. And he was forced to resign by a board who probably never put a 90 hour work week in their entire life, much less into building OpenAI. A slap on the face. I don’t care what the board’s reasoning was but when their actions caused employees who dedicated their lives to building the organization resign (especially when most of them played no part at all into building this amazing organization), they had to go in disgrace. I doubt any of them will ever reach career highs higher than being on OpenAI’s board, and the world’s better off for it.
P.S., Ilya of course is an exception and not included in my above condemnation. He also notably reversed his position when he saw OpenAI was being killed by his actions.
I've stopped caring about anyone who uses the word "safety". It's vague and a hand-waive-y way to paint your opponents as dangerous without any sort of proof or agreed upon standard for who/what/why makes something "safety".
FWIW, that's called zealotry and people do a lot of dramatic, disruptive things in the name of it. It may be rightly aimed and save the world (or whatever you care about), but it's more often a signal to really reflect on whether you, individually, have really found yourself at the make-or-break nexus of human existence. The answer seems to be "no" most of the time.
It seems that the safety team within OpenAI lost. My biggest fear with this whole AI thing is hostile takeover, and openAI was best positioned to at least do an effort to prevent that. Now, I’m not so sure anymore.
It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.
She's gone now, and Altman remains, substantially because she didn't know how to pick up a phone and interact with another human being. Who knows, she might have even been successful at her stated goal, of protecting AI, had she done even the most basic amount of problem solving first. She should not have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.
All it takes is a narrative, just like the one that happened in OpenAI and the way it is currently being shown in Anthropic.
The fact that you think current inflation has anything to do with that stimulus bill back then shows how little you understand about any of this.
Larry Summers is the worst kind of person. Somebody who is nothing but a corporate stooge trying to act like the adult by being "reasonable", when that just means enriching his corporate friends, letting people suffer and not spending money (which any study will tell you is not the correct approach to situations like this because of multiplying effects they have down the line).
Some necessary reading:
In regards to watering it down to get GOP votes: https://archive.nytimes.com/krugman.blogs.nytimes.com/2009/0...
He made the right calls, fast, with limited information.
Things further shifted from plan a to b to… whatever this is.
Despite that, MSFT still came out on top.
Consider if Satya didn’t say anything. Suppose MSFT stood back and let things play out.
That’s a gap for google or some competitor to make a move. To showcase their stability and long term business friendly vision.
Instead by moving fast, doing the “right” thing, this opportunity was denied and used to MSFTs benefit.
If the board folded, it would return to the stays quo. If the board held, MSFT would have secured OpenAI, for essentially nothing.
Edit: changed board folded x2 to board folded + board held, last para.
Not judging, just observing.
Btw, I would not be pleased if Kissinger were on this board in lieu of Summers. He's already ancient, mostly checked out, and yet still I'd worry his old lust for power would resurface. And with such a mixed reputation, and plenty of people considering him a war criminal, he'd do little to assuage the AI-not-kill-everyone-ism faction.
But don't notice anything from that. That would be sexist, right Anton?
Although, he's also partly responsible for the existence of Facebook by starting Sheryl Sandberg's career. Some people might think that's good.
By all accounts he paid about double what it was worth and the value has collapsed from there.
Probably not a great idea to say anything overtly political when you own a social media company, as due to politics being so polarised in the US, any opinion is going to divide your audience in half causing a usage collapse and driving support to competing platforms.
https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...
I believe the goal of the opposing faction was mainly to avoid Sam dominating board and they achieved that, which is why they've accepted the results.
After more opinions come out, I'm guessing Sam's side won't look as strong, and he'll become "fireable" again.
Same with employees and their stock comp. Same with microsoft.
It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.
The board was incompetent. It will go down in the history books as one of the biggest blunders of a board in history.
If you want to take drastic action, you consult with your biggest partner keeping the lights on before you do so. Helen Toner and Tasha McCauley had no business being on this board. Even if you had safety concerns in mind, you don't bypass everyone else with a stake in the future of your business because you're feeling petulant.
Well, he also caused the IRA to pass by telling Manchin that it wouldn't be inflationary.
But remember when he released this prediction in 2021?
> Larry Summers on U.S. economic outlook:
> 33% odds of stagflation
> 33% odds of recession
> 33% rapid growth, no surge in inflation
All that hedging and then none of those things happened!
I think the board should have been more transparent on why they made the decision to fire Sam.
Or perhaps these employees only cared about their AI work and money? The foundation would be perceived as the culprit against them.
Really sad there’s no clarity from the old board disclosed. Hope one day we will know.
It's also naive to think it was a struggle for principles. The rapid commercialization vs. principles is what the actors claim to rally their respective troops, in reality it was probably a naked power grab, taking advantage of the weak and confuse org structure. Quite an ill prepared move, the "correct" way to oust Altman was to hamstring him in the board and enforce a more and more ceremonial role until he would have quit by himself.
so you could say they intentionally don't see safety as the end in itself, although I wouldn't quite say they don't care.
At a minimum something that doesn't immediately result in a backlash where 90% of the engineers most responsible for recent AI dev want you gone, when you're whole plan is to control what those people do.
Another civilization perished in the great filter.”
I'm sure some of those employees were easily going to make $10m+ in the sale. That's a pretty great motivation tool.
Overall, I do agree with you. The board could not justify their capricious decision making and refused to elaborate. They should've brought him back on Sunday instead of mucking around. OpenAI existing is a good thing.
Things perhaps could've been different if they'd pointed to the founding principles / charter and said the board had an intractable difference of opinion with Sam over their interpretation, but then proceeded to thank him profusely for all the work he'd done. Although a suitable replacement CEO out the gate and assurances that employees' PPUs would still see a liquidity event would doubtless have been even more important than a competent statement.
Initially I thought for sure Sam had done something criminal, that's how bad the statement was.
(Sad day for popcorn sales.)
It strikes me as exactly the sort of thing she should be writing given OpenAI's charter. Recognizing and rewarding work towards AI safety is good practice for an organization whose entire purpose is the promotion of AI safety.
What is the benefit of learning about this kind of drama minute-by-minute, compared to reading it a few hours later on hacker news or next day on wall street journal?
Personally I found twitter very bad for my productivity, a lot of focus destroyed just to know "what is happening" when there was neglible drawbacks of finding about news events a few hours later.
She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.
"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.
https://news.ycombinator.com/edit?id=38375767
It will be super interesting to see the subtle struggles for influence between these three.
Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.
It's probably convenient for them to have everyone focused on the fear of evil Skynet wiping out humanity, while everyone is distracted from the more likely scenario of people with an agenda controlling the advice given to you by your super intelligent assistant.
Because of X, we need to invade this country. Because of Y, we need to pass all these terrible laws limiting freedom. Because of Z, we need to make sure AI is "safe".
For this reason, I view "safe" AIs as more dangerous than "unsafe" ones.
It's absolutely helpful for mental health, to show people that there's not some conspiracy out to disenfranchise and oppress them, rather the distribution of outcomes is a natural result of the distribution of genetic characteristics.
No, if OpenAI is reaching singularity, so are Google, Meta, and Baidu etc. so proper course of action would be to loop in NSA/White House. You'll loop in Google, Meta, MSFT and will start mitigation steps. Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.
I believe this is more a fight of ego and power than principles and direction.
Calling this a truth is pretty silly. There is a lot of evidence that human cognition is highly dependent on environment.
Microsoft is showing to investors that it is going to be an AI company, one way or the other.
Microsoft still has access to everything OpenAI does.
Microsoft has its friend, Sam, at the helm of OpenAI and with a more tighter grip on the company than ever.
Its still a win for Microsoft.
(Fortunately we are working on this very hard and making incredible progress.)
1. He tried to not buy Twitter very hard and OpenAI’s new board member forced his hand
2. It hasn’t been a good financial decision if the banks and X’s own valuation cuts are anything to go by.
3. If his purpose wasn’t to make money…all of these tweets would have absolutely been allowed before Elon bought the company. He didn’t affect any relevance changes here.
Why would one person owning something so important be better than being publicly owned? I don’t understand the logic.
Really it just shows the whole non-profit arm of the company was even more of a lie then it appeared.
”But they are so smart…” argument is bs. Nobody can be presumed to be super good outside their own specific niche. Linus Pauling and vitamin C.
Until we have at least a hint of a mechanistic model if AI driven extinction event, nobody can be an expert on it, and all talk in that vein is self important delusional hogwash.
Nobody is pro-apocalypse! We are drowning in things an AI could really help with.
With the amount of energy needed for any sort of meaningfull AI results, you can always pull the plug if stuff gets too weird.
But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology; potentially more powerful than any in decades. We've all seen what can happen when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results can be not great. And I'm pretty sure a lot of the general public (and open AI staff) care about THAT.
For me, the safety/existential stuff is just one facet of the general problem of trying to align tech companies + their technology with humanity-at-large better than we have been recently. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).
Yeah, such a person totally blocks your startup from making billions of dollars instead of benefitting humanity.
Oh wait...
Whole charade was by GPT5 to understand the position of person sitting next to red button and secondary to stress test Hacker News.
The biggest sticking point was Sam being on the board. Ultimately, he conceded to not being on the board, at least initially, to close the deal. The hope/expectation is that he will end up on the board eventually."
(https://twitter.com/emilychangtv/status/1727216818648134101)
Also, all the stuff they started doing with the hearts and cryptic messages on Twitter (now X) was a bit ... cult-y?. I wouldn't doubt there was a lot of manipulation behind all that, even from @sama itself.
So, there is goes, it seems that there's a big chance now that the first AGI will land on the hands of a group with the antics of teenagers. Interesting timeline.
On the other hand, its quite apparent that essentially all of the OpenAI workforce (understandably, given the compensation package which creates a financial interest at odds with the nonprofit's mission) and in particular the entire executive team saw the charter as a useful PR fiction, not a mission (except maybe Ilya, though the flip-flop in the middle of this action may mean he saw it the same way, but thought that given the conflict, dumping Sam and Greg would be the only way to preserve the fiction, and whatever cost it would have would be worthwhile given that function.)
For example, there are a lot more boys than girls who struggle with basic reading comprehension. Sound familiar?
When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."
But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.
The staff never mutinied. They threatened to mutiny. That's a big difference!
Yesterday, I compared these rebels to Shockley's "traitorous eight" [1]. But the traitorous eight actually rebelled. These folk put their name on a piece of paper, options and profit participation units safely held in the other hand.
[1] >>38348123
There are also more intellectually challenged men btw, but somehow that rarely gets discussed.
But the effects are quite small, and should not dissuade anyone to do anything IMO.
>It is just a joke that Facebook could be valued at $6 billion.
lol, seems HN is same since forever.
Do I think Helen has my interests at heart? Unlikely. Do Sam or Satya? Absolutely not!
[1] I can't wait for AI doctors working for insurers to deny me treatments, AI vendors to figure out exactly how much they can charge me for their dynamically-priced product, AI answering machines to route my customer support calls through Dante's circles of hell...
Is it? Why was the press release worded like that? And why did Ilya came up with two mysterious reasons of why board fired Sam if he had quite clearly better and more defendable reason if this goes to court. Also Adam is pro commercialization at least looking at public interviews, no?
It's very easy to make the story in brain which involves one character being greedy, but it doesn't seem it is the exact case here.
The safest option was to sign the paper, once the snowball started rolling. There was nothing much to lose, and a lot to gain.
This has been my single strongest takeaway from this saga: Twitter remains the centre of controversy. When shit hit the fan, Sam and Satya and Swisher took to Twitter. Not Threads. Not Bluesy. Twitter. (X.)
Second, yes, you are being sexist, and irrational. What you’re doing is exactly the same as the reasons that it’s racist and irrational to say “whites are better at x”.
You’re cherry picking data to examine, to reach a conclusion that you want to reach. You’re ignoring relevant causal factors - or any causal factors at all, in fact, aside from the spurious correlation you’ve assumed in your conclusion.
You’re ignoring decades of research on the subject - although in your defense, you’re probably just not aware of it.
Most irrationally of all, you’re generalizing across an entire group, selected by a factor that’s only indirectly relevant to the property you’re incorrectly generalizing about.
As such, “sexist” is just a symptom of fundamentally confused and under-informed thinking.
Iirc, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure and hence gave the employees shares or profit units (whatever the hell that is). The NP now shields MSFT from regulatory issues.
I do wonder how many of those employees would actually go to MSFT. It feels more like a gambit to get Altman back in since they were about to cash out with the tender offer.
OpenAI is one of half a dozen teams [0] actively working on this problem, all funded by large public companies with lots of money and lots of talent. They made unique contributions, sure. But they're not that far ahead. If they stumble, surely one of the others will take the lead. Or maybe they will anyway, because who's to say where the next major innovation will come from?
So what I don't get about these reactions (allegedly from the board, and expressed here) is, if you interpret the threat as a real one, why are you acting like OpenAI has some infallible lead? This is not an excuse to govern OpenAI poorly, but let's be honest: if the company slows down the most likely outcome by far is that they'll cede the lead to someone else.
[0]: To be clear, there are definitely more. Those are just the large and public teams with existing products within some reasonable margin of OpenAI's quality.
AGI is still very far away and the fear mongering is nothing but PR stunt.
But the devs need their big payout now. Which explains the mutiny.
The “safety” board of directors drank their own koolaid a bit too much.
Decades of research shows that teachers give girls better grades than boys of the same ability. This is not some new revelation.
https://www.forbes.com/sites/nickmorrison/2022/10/17/teacher...
https://www.bbc.co.uk/news/education-31751672
A whole cohort of boys got screwed over by the cancellation of exams during Covid. That is just reality, and no amount of creepy male feminist posturing is going to change that. Rather, denying issues in boys education is liable to increase male resentment and bitterness, something we've already witnessed over the past few years.
above that in the charter is "Broadly distributed benefits", with details like:
"""
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
"""
In that sense, I definitely hate to see rapid commercialization and Microsoft's hands in it. I feel like the only person on HN that actually wanted to see Team Sam lose, although it's pretty clear Team Helen/Ilya didn't have a chance, the org just looks hijacked by SV tech bros to me, but I feel like HN has a blindspot to seeing that at all and considering it anything other than a good thing if they do see it.
Although GPT barely looks like the language module of AGI to me and I don't see any way there from here (part of the reason I don't see any safety concern). The big breakthrough here relative to earlier AI research is massive amounts more compute power and a giant pile of data, but it's not doing some kind of truly novel information synthesis at all. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc., but a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.
Here is a meta analysis on the subject: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057475/
I have said recently elsewhere SV now devalues builders but it is not just VCs/sales/product, a huge amount is devops and sre departments. They make a huge amount of noise about how all development should be free and the value is in deploying and operating the developed artifacts. Anyone outside this watching would reasonably conclude developers have no self respect, hardly aspirational positions.
Unfortunately, in the past few days, the only thing they've accomplished is significantly damaging their brand.
Personally as I watched the nukes be lobbed I'd rather not be the person who helped lob them. And hope to god others look at the same problem (a misaligned AI that is making insane decisions) with the exact same lens. It seems to have worked for nuclear weapons since WW2, one can that we learned a lesson there as a species.
The Russian Stanislav Petrov who saved the world comes to mind."Well the Americans have done it anyways" was the motivation and he didn't launch. The cost of error was simply too great.
I'm not sure what faction Bret and Larry will be on. Sam will still have power by virtue of being CEO and aligned with the employees.
Thrive was about to buy employee shares at a $86 bn valuation. The Information said that those units had 12x since 2021.
https://www.theinformation.com/articles/thrive-capital-to-le...
Those are different things.
Nuclear war is exactly the kind of thing for which we do have excellent expertise. Unlike for AI safety which seems more like bogus cult atm.
Nuclear power would be the best form of large scale power production for many situations. And smaller scale too in forms of emerging SMR:s.
If they'd made their move a few months ago when he was out scanning retinas in Kenya they might have had more success.
allegedly again, the board wanted Sam to stop doing this, and now he was trying to do the same thing with some saudi investors, or actually already did it behind their back, i dont know
Seams very unlikely, board could communicate that. Instead they invented some BS reasons, which nobody took as a truth. It looks like more personal and power grab. The staff voted for monetization, people en mass don't care much about high principals. Also nobody wants to work under inadequate leadership. Looks like Ilya lost his bet, or Sam is going to keep him around?
https://www.hollywoodreporter.com/business/business-news/sar...
Microsoft is showing that it is still able to capture important scale ups and 'embrace' them, whilst also acting as if they have the moral high ground, but in reality are doing research with a high governance errors and potential legal problems away from their premises. and THAT is why stakeholders like him.
In fact this observation is pertinent to the original stated goals of openAI. In some sense companies and organisations are superinteligences. That is they have goals, they are acting in the real world to achieve those goals and they are more capable in some measures than a single human. (They are not AGI, because they are not artificial, they are composed of meaty parts, the individuals forming the company.)
In fact what we are seeing is that when the superinteligence OpenAI was set up there was an attempt to align the goals of the initial founders with the then new organisation. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.
Did they succeed? Too early to tell for sure, but there are at least question marks around it.
How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI provess. In their pursuit of economic success the forces wishing to enrich themselves overpowered the forces wishing to concentrate on the goals. Safety will be still a figleaf for them, if nothing else to achieve regulatory capture to keep out upstart competition.
How would one argue for? OpenAI is still around. The charter is still around. To be able to achieve the lofty goals contained in it one needs a lot of resources. Money in particular is a resource which enables one greater powers in shaping the world. Achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To achieve that it commercialises the relatively benign, safe and simple LLMs it has access to. This serves the original goal in three ways: gains further resources, estabilishes the organisation as a pre-eminent expert on AI and thus AI safety, provides it with a relatively safe sandbox where adversarial forces are trying its safety concepts. In other words all is well with the original goals, the “golem” that is OpenAI is still well aligned. It will achieve the original goals once it has gained enough resources to do so.
The fact that we can’t tell which is happening is in fact the worry and problem with superinteligence/AI safety.
This kind of speculative mud slinging makes this place seem more like a gossip forum.
I do however expect the boards of directors of important companies to avoid publicly supporting obviously regressive ideas such as this gem.
He tells what others like to hear, and manages to gain money out of it
* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible because the longer they wait, the more likely it would lead to the proliferation of powerful AGI systems due to GPU overhang. Why? With more computational power at one's dispense, it's easier to find an algorithm, even a suboptimal one, to train an AGI.
As a glimpse on how an AI can be harmful, this paper explores how LLMs can be used to aid in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?
What if dozens other groups become armed with means to perform such an attack like this? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
We know that there're quite a few malicious human groups who would use any means necessary to destroy another group, even at a serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.
* Helen and Ilya might believe it's better to slow down AGI development until we find technical means to deeply align an AGI with humanity first. This July, OpenAI started the Superalignment team with Ilya as a co-lead:
https://openai.com/blog/introducing-superalignment
But no one anywhere found a good technique to ensure alignment yet and it appears OpenAI's newest internal model has a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he observed the advance just a couple of weeks ago and it was only the fourth time he saw that kind of leap.)
This is a very naive take.
Who sat before Congress and told them they needed to control AI other people developed (regulatory capture)? It wasn't the OpenAI board, was it?
1. Have some explanation
2. Have a new CEO who is willing and able to do the job
If you can't do these things, then you probably shouldn't be firing the CEO.
Traditional response to this happening is to say something about your "priors" being wrong instead of taking responsibility.
Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.
This outcome WAS microsoft's role in all this. Satya offering sam a ceo like position to create a competing product was leverage for this outcome.
They have. At length. E.g.,
https://ai100.stanford.edu/gathering-strength-gathering-stor...
https://arxiv.org/pdf/2307.03718.pdf
https://eber.uek.krakow.pl/index.php/eber/article/view/2113
https://journals.sagepub.com/doi/pdf/10.1177/102425892211472...
https://jc.gatspress.com/pdf/existential_risk_and_powerseeki...
For just a handful of examples from the vast literature published in this area.
The world is not zero-sum. Most economic transactions benefit both parties and are a net benefit to society, even considering externalities.
GPT can "clone" the "semantic essence" of everyone who converses with it, generating new questions with prompts like "What interesting questions could this user also have asked, but didn't?" and then have an LLM answer it. This generates high-quality, novel, human-like, data.
For instance, cloning Paul Graham's essence, the LLM came up with "SubSimplify": A service that combines subscriptions to all the different streaming services into one customizable package, using a chat agent as a recommendation engine.
As a non-profit board member, I'm curious why their bylaws are so crummy that the rest of the board could simply remove two others on the board. That's not exactly cunning design of your articles of association ... :-)
The one thing in Microsoft has stayed constant from Gates to Ballmer to Satya: you should never, ever form a close alliance with MS. They know how to screw alliance partners. i4i, Windows RT partners, Windows Phone Partners, Nokia, HW partners in Surface. Even Steve Jobs was burned few times.
Really it's no more descriptive than "do good", whatever doing good means to you.
They did, and fell, IIRC, vastly short (IIRC, an order of magnitude, maybe more) short of their minimum short-term target. The commercial subsidiary thing was a risk taken to support the mission because it was clear it was going to fail from lack of funding otherwise.
This paper explores one such danger and there are other papers which show it's possible to use LLM to aid in designing new toxins and biological weapons.
The Operational Risks of AI in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?
An example of such an event: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
How do you propose we deal with this sort of harm if more powerful AIs with no limit and control proliferate in the wild?
.
Note: Both sides of the OpenAI rift care deeply about AI Safety. They just follow different approaches. See more details here: >>38376263
The fact that Adam D'Angelo is still on the new board apparently is much more baffling than the fact that Tonrer or Ilya are not.
The ways we build AI will deeply affect the values it has. There is no neutral option.
Most of the new AI startups are one trick ponies obsessively focused on LLM's. LLM's are only one piece of the puzzle.
Nobody comes out of this looking good. Nobody. If the board thought there was existential risk, they should have been willing to commit to it. Hopefully sensible start-ups can lure people away from their PPUs, now evident for the mockery they always were. It's beyond obvious this isn't, and will never be, a trillion dollar company. That's the only hope this $80+ billion Betamax valuation rested on.
I'm all for a comedy. But this was a waste of everyones' time. At least they could have done it in private.
If Summers had in fact limited himself to the statistical claims, it would have been less of an issue. He would still have been wrong, but he wouldn't have been so obviously sexist.
It's easy to refute Summers' claims, and in fact conclude that the complete opposite of what he was saying is more likely true. "Gender, Culture, and mathematics performance"(https://www.pnas.org/doi/10.1073/pnas.0901265106) gives several examples that show that the variability as well as male-dominance that Summers described is not present in all cultures, even within the US - for example, among Asian American students in Minnesota state assessments, "more girls than boys scored above the 99th percentile." Clearly, this isn't an issue of "intrinsic aptitude" as Summers claimed.
> A whole cohort of boys got screwed over by the cancellation of exams during Covid.
I'm glad we've identified the issue that triggered you. But your grievances on that matter are utterly irrelevant to what I wrote.
> no amount of creepy male feminist posturing is going to change that
It's always revealing when someone arguing against bigotry is accused of "posturing". You apparently can't imagine that someone might not share your prejudices, and so the only explanation must be that they're "posturing".
> increase male resentment and bitterness
That's a choice you've apparently personally made. I'd recommend taking more responsibility for your own life.
There's a UtopAI / utopia joke in there somewhere, was that intentional on your part?
Reasoning about tiny probabilities of massive (or infinite) cost is hard because the expected value is large, but just gambling on it not happening is almost certain to work out. We should still make attempts at incorporating them into decision making because tiny yearly probabilities are still virtually certain to occur at larger time scales (eg. 100s-1000s of years).
If by "the risk is proven" you mean there's more than a 0% chance of an event happening, then there are almost an infinite number of such risks. There is certainly more than a 0% risk of humanity facing severe problems with an unaligned AGI in the future.
If it means the event happening is certain (100%), then neither a meteorite impact (of a magnitude harmful to humanity) nor the actual use of nuclear weapons fall into this category.
If you're referring only to risks of events that have occurred at least once in the past (as inferred from your examples), then we would be unprepared for any new risks.
In my opinion, it's much more complicated. There is no clear-cut category of "proven risks" that allows us to disregard other dangers and justifiably see those concerned about them as crazy radicals.
We must assess each potential risk individually, estimating both the probability of the event (which in almost all cases will be neither 100% nor 0%) and the potential harm it could cause. Different people naturally come up with different estimates, leading to various priorities in preventing different kinds of risks.
His worst personal problem is that he keeps replying "fascinating" to neo-Nazis and random conspiracy theorists because he wants to be internet friends with them.
Everybody's guard is going to be up around Sam from now on. He'll have much less leverage over this board than he did over the previous one (before the other three of nine quit). I think eventually he will prevail because he has the charm and social skills to win over the other independent members. But he will have to reign in his own behavior a lot in order to keep them on his side versus D'Angelo
I very much recommend reading the book “Superintelligence: Paths, Dangers, Strategies” from Nick Bostrom.
It is a seminal work which provides a great introduction into these ideas and concepts.
I found myself in the same boat as you do. I was seeing otherwise inteligent and rational people worry about this “fairy tale” of some AI uprising. Reading that book give me an appreciation of the idea as a serious intelectual excercise.
I still don’t agree with everything contained in the book. And definietly don’t agree with everything the AI doomsayers write, but i believe if more people would read it that would elevate the discourse. Instead of rehashing the basics again and again we could build on them.
The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.
If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.
If an AI’s only surface area is a writing journal and canvas then the risk is about the same as browsing Tumblr.
GPT4 in image viewing mode doesn't seem to be nearly as smart as text mode, and image generation IME barely works.
(It's a Berkeley cult so of course it's got those.)
Yes, they do help explain that. This does not preclude other influences. You can't go two sentences without making a logical error, it's quite pathetic.
I'll do you a favour and disregard the rest of your post - you deviate from the mean a bit too much for this to be worth it. Just try not to end up like Michael Kimmel, lol.
The reason everyone thinks it's about safety seems largely because a lot of e/acc people on Twitter keep bringing it up as a strawman.
Of course, it might end up that it really was about safety in the end, but for now I still haven't seen any evidence. The story about Sam trying to get board control and the board retaliating seems more plausible given what's actually happened.
It doesn't matter if OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM, they can all write/complete a valid limerick about "There once was a man from Nantucket".
I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.
And Helen Toner was already as much of a fed as you could want; she had exactly the resume a CIA agent would have. (Probably wasn't though.)
This isn't about political correctness. It's far less reasonable than that.
No, but some parts of it very much are. The whole point of AI safety is keeping it away from those parts of the world.
How are Sam and Satya going to do that? It's not in Microsoft's DNA to do that.
They still have gpt4 and rumored gpt4.5 to offer, so people have no choice but to use them. The internet has such short an attention span, this news will get forgotten in 2 months
The head of every major AI research group except Metas believes that whenever we finally make AGI it's vital that it shares our goals and values at a deep even-out-of-training-domain level and that failing at this could lead to human extinction.
And yet "AI safety" is often bandied about to be "ensure GPT can't tell you anything about IQ distributions".
There isn't just a big red button that says "destroy company" in the basement. There will be partnerships to handle, severance, facilities, legal issues, maybe lawsuits, at the very least a lot of people to communicate with. Companies don't just shut themselves down, at least not multi billion dollar companies.
The real ”sheer stupidity” is this very belief.
If you Google for AI safety vs. AI ethics, or AI alignment vs. AI ethics, you can see both camps.
Sam also played his hand extremely well; he's likely learned from watching hundreds of founder blowups over the years. He never really seemed angry publicly as he gained support from all the staff including Ilya & Mira. I had little doubt Emmett Shear would also welcome Sam's return since they were both in the first YC batch together.
Also, as we don't know the probabilities, I don't think they are a useful metric. Made up numbers don't help there.
Edit: I would encourage people to study some classic cold war thinking, because that relied little on probabilities, but rather on trying to avoid situations where stability is lost, leading to nuclear war (a known existential risk).
Fast forward 5-10 years, someone will say: "LLM were the worst thing we developed because they made us more stupid and permitted politicians to control even more the public opinion in a subtle way.
Just like tech/HN bubble started saying a few years ago about social networks (which were praised as revolutionary 15 years ago).
This line of argument is facile and destructive to conversation anyway.
It boils down to, "Pointing out corporate hypocrisy isn't valuable because corporations are liars," and (worse) it implies the other person is naive.
In reality, we can and should be outraged when corporations betray their own statements and supposed values.
That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.
They were both members of the inaugural class of Y-Combinator, and all of Shear's published actions since accepting the role (like demanding evidence of Sam' wrongdoing) seem to have helped Sam return to his role.
I don't think it's a stretch to say that he did win, in that he might have accomplished exactly what he wanted when he accepted the role.
Expected value and probability have no place in these discussions. Some risks we know can materialize, for others we have perhaps a story on what could happen. We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.
In the 1990s and the 00s, it was no too uncommon for anti-GMO environmental activist / ecoterrorist groups to firebomb research facilities and to enter farms and fields to destroy planted GMO plants. Earth Liberation Front was only one of such activist groups [1].
We have yet to see even one bombing of an AI research lab. If people really are afraid of AIs, at least they do so more in the abstract and are not employing the tactics of more traditional activist movements.
[1] https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...
I can't believe that
It's also unimaginative; having a variety of traits is itself good for society, which means you don't need variation in genetics to cause it. It's adaptive behavior for the same genes to simply lead to random outcomes. But people who say "genes cause X" probably wouldn't like this because they want to also say "and some people have the best genes".
From here on out there is going to be far more media scrutiny on who gets picked as a board member, where they stand on the company's policies, and just how independent they really are. Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.
> Costly signals are statements or actions for which the sender will pay a price —political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat
Firing Sam Altman and hiring him back two days later was a perfect example of a costly signal, as it cost all involved their board positions.
There's an element of farce in all of this, that would make for an outstanding Silicon Valley episode; but the fact that Sam Altman can now enjoy unchecked power as leader of OpenAI is worrying and no laughing matter.
[0] https://cset.georgetown.edu/publication/decoding-intentions/
Let's make the situation a little different. Could MSF pay a private surgery with investors to perform reconstruction for someone?
Could they pay the surgery to perform some amount of work they deem aligns with their charter?
Could they invest in the surgery under the condition that they have some control over the practices there? (Edit - e.g. perform Y surgeries, only perform from a set of reconstructive ones, patients need to be approved as in need by a board, etc)
Raising private investment allows a non profit to shift cost and risk to other entities.
The problem really only comes when the structure doesn't align with the intended goals - which is something distinct to the structure, just something non profits can do.
No, it's to ensure it doesn't kill you and everyone you love.
So you let your product teams figure out how the brand needs to be protected and the workflow needs to be shaped, like always, and you don't defer to some outside department full of beatniks in berets or whatever.
The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests. The naïveté from the NPO faction was believing they’d be able to develop these capacities outside the strict control of the military industrial complex when AI has been established as part of the new Cold War with China.
For me, the whole thing is just human struggle. It is about fighting for people they love and care, against some people they dislike or indifferent to.
You might be able to imagine a world where there was an external company that did the same thing as for-profit OpenAI, and OpenAI nonprofit partnered with them in order to get their AI ideas implemented (for free). OpenAI nonprofit is basically getting a good deal.
MSF could similarly create an external for-profit hospital, funded by external investors. The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.
Of course, there's a lot of sketchiness in practice, which we can see in this situation with Microsoft influencing the direction of nonprofit OpenAI even though it shouldn't be. I think there would have been real legal issues if the Microsoft deal had continued.
Sam and Greg will be joining Microsoft.
And:
Sam and Greg have in principle agreed to join Microsoft but not signed anything.
If Microsoft has (now) agreed to release either of them (or anyone else) from contractual obligations, then the first one was true.
If not, then the first was was a lie, and the second one was true.
This whole drama has been punctuated by a great deal of speculation, pivots, changes and, bluntly, lies.
Why do we need to sugar coat it?
Where the fuck is this new magical Microsoft research lab?
Microsoft preparing a new office for openAI employees? Really? Is that also true?
Is Sam actually going to be on the board now, or is this another twist in this farcical drama when they blow it off again?
I see no reason to, at least point, give anyone involved the benefit of the doubt.
Once the board actually changes, or Microsoft actually does something, I’m happy to change my tune, but I’m calling what I see.
Sam did not join Microsoft at any point.
ChatGPT turning into Skynet and nuking us all is a much more remote problem.
How do you prove a mechanism for doom without it already having occurred? The existential risk is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.
>Expected value and probability have no place in these discussions.
I disagree. Expected value and probability is a framework for decision making in uncertain environments. They certainly have a place in these discussions.
Anthropic formed from people who split from OpenAI, and xAI in response to either the company or ChatGPT, so people would have plenty of options.
If the staff had as little to go on as the rest of us, then the board did something that looked wild and unpredictable, which is an acute employment threat all by itself.
https://openai.com/blog/introducing-superalignment
I presume the current alignment approach is sufficient for the AI they make available to others and, in any event, GPT-n is within OpenAI's control.
In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk then "spontaneously develops free will and decides humans are unnecessary".
I believe this position reflects the thoughts of the majority of AI researchers, including myself. It is concerning that we do not fully understand something as promising and potentially dangerous as AI. I'm actually on Ilya's side; labeling his attempt to uphold the original OpenAI principles as an act of "coup" is what is happening now.
From a 2016 New Yorker article:
> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”
> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
source: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Eh? That would be an awful idea. They have no expertise on this and government institutions like thus are misaligned with the rest of humanity by design. E.g. NSA recruits patriots and has many systems, procedures and cultural aspects in place to ensure it keeps up its mission of spying on everyone.
This meme was already dead before the recent events. Whatever the company was doing, you could say it wasn’t open enough.
> a real disruptor must be brewing somewhere unnoticed, for now
Why pretend OpenAI hasn’t just disrupted our way of life with GPTs in the last two years? It has been the most high profile tech innovator recently.
> OpenAI does not have in its DNA to win
This is so vague. What does it not have in its… fundamentals? And what is to “win”? This statement seems like just generic unhappiness without stating anything clearly. By most measures, they are winning. They have the best commercial LLM and continue to innovate, they have partnered with Microsoft heavily, and they have so far received very good funding.
[1]:(https://twitter.com/emilychangtv/status/1727216818648134101)
You are absolutely right. There is no question about that the AI will be an expert at subtly steering individuals and the whole society in whichever direction it does.
This is the core concept of safety. If no-one steers the machine then the machine will steer us.
You might disagree with the current flavour of steering the current safety experts give it, and that is all right and in fact part of the process. But surely you have your own values. Some things you hold dear to you. Some outcomes you prefer over others. Are you not interested in the ability to make these powerful machines if not support those values, at least not undermine them? If so you are interested in AI safety! You want safe AIs. (Well, alternatively you prefer no AIs, which is in fact a form of safe AI. Maybe the only one we have mastered in some form so far.)
> because of X, we need to invade this country.
It sounds like you value peace? Me too! Imagine if we could pool together our resources to have an AI which is subtly manipulating society into the direction of more peace. Maybe it would do muckraking investigative journalism exposing the misdeeds of the military-industrial complex? Maybe it would elevate through advertisement peace loving authors and give a counter narrative to the war drums? Maybe it would offer to act as an intermediary in conflict resolution around the world?
If we were to do that, "ai safety" and "alignment" is crucial. I don't want to give my money to an entity who then gets subjugated by some intelligence agency to sow more war. That would be against my wishes. I want to know that it is serving me and you in our shared goal of "more peace, less war".
Now you might say: "I find the idea of anyone, or anything manipulating me and society disgusting. Everyone should be left to their own devices.". And I agree on that too. But here is the bad news: we are already manipulated. Maybe it doesn't work on you, maybe it doesn't work on me, but it sure as hell works. There are powerful entities financially motivated to keep the wars going. This is a huuuge industry. They might not do it with AIs (for now), because propaganda machines made of meat work currently better. They might change to using AIs when that works better. Or what is more likely employ a hybrid approach. Wishing that nobody gets manipulated is frankly not an option on offer.
How does that sound as a passionate argument for AI safety?
Their bank accounts current and potential future numbers?
Even if they are genuine in believing firing Sam is to keep OpenAI's founding principles, they can't be doing a better job in convincing everyone they are NOT able to execute it.
OpenAI has some of the smartest human beings on this planet, saying they don't think critically just because they don't vote with what you agree is reaching reaching.
By the way the AI scientists get a lot of respect and admiration see Ilya for example.
People purposefully avoided probabilities in high risk existential situations in the past. There is only one path of events and we need to manage that one.
The reputation boost is probably worth a lot more than the direct financial compensation he's getting.
That's the current Yudkowsky view. That it's essentially impossible at this point and we're doomed, but we might as well try anyway as its more "dignified" to die trying.
I'm a bit more optimistic myself.
Don't forget that it would also increase the power of the good guys. Any technology in history (starting with fire) had good and bad uses but overall the good outweighed the bad in every case.
And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.
Putting thise restrictions into a tool like ChatGPT goes to far so, because so far AI still needs a prompt to do anything. The problem I see, is with ChatGPT, being trained on a lot hate speech or prpopagabda, slipts in those things even if not prompted to. Which, and I am by no means an AI expert not by far, seems to be a sub-problem of the hallucination problems of making stuff up.
Because we have to remind ourselves, AI so far is glorified mavhine learning creating content, it is not concient. But it can be used to create a lot of propaganda and deffamation content at unprecedented scale and speed. And that is the real problem.
There's also a distinction between trying to follow some broad textbook information and getting detailed feedback from an advanced conversational AI with vision and more knowledge than in a few textbooks/articles in real time.
Btw: do you think ridicule eould be helpful here?
That doesn't matter that much. If your analysis is correct then it means a (tiny) minority of OpenAI cares about AI safety. I hope this isn't the case.
But my reading of this drama is that the board were seen as literally insane, not that Altman was seen as spectacularly heroic or an underdog.
There are only three groups of people who could be subject to betrayal here: employees, investors, and customers. Clearly they did not betray employees or investors, since they largely sided with Sam. As for customers, that's harder to gauge -- did people sign up for ChatGPT with the explicit expectation that the research would be "open"?
The founding charter said one thing, but the majority of the company and investors went in a different direction. That's not a betrayal, but a pivot.
That would be a really bad take on climate change.
AI is an incredibly powerful tool for spreading propaganda, and thatvis used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard fornbormal folks regardless of which "side" they are on). That's the threat, not Skynet...
An unaligned base model doesn't answer questions at all and is hard to use for anything, including evil purposes. (But it's good at text completion a sentence at a time.)
An instruction-tuned not-RLHF model is already largely friendly and will not just eg tell you to kill yourself or how to build a dirty bomb, because question answering on the internet is largely friendly and "aligned". So you'd have to tune it to be evil as well and research and teach it new evil facts.
It will however do things like start generating erotica when it sees anything vaguely sexy or even if you mention a woman's name. This is not useful behavior even if you are evil.
You can try InstructGPT on OpenAI playground if you want; it is not RLHFed, it's just what you asked for, and it behaves like this.
The one that isn't even instruction tuned is available too. I've found it makes much more creative stories, but since you can't tell it to follow a plot they become nonsense pretty quickly.
That's incorrect. The new members will be chosen by D'Angelo and the two new independent board members. Both of which D'Angelo had a big hand in choosing.
I'm not saying Larry Summers etc going to be in D'Angelo's pocket. But the whole reason he agreed to those picks is because he knows they won't be in Sam's pocket, either. More likely they will act independently and choose future members that they sincerely believe will be the best picks for the nonprofit.
> "...[there] is relatively clear evidence that whatever the difference in means—which can be debated—there is a difference in the standard deviation and variability of a male and female population..."
Sheesh, of all the things to be cancelled for...
They just have different ideas about one or more of: how likely another team is to successfully charge ahead while ignoring safety, how close we are to AGI, how hard alignment is.
They can't control the CEO, neither fire him.
They can't take actions to take back the back control from Microsoft and Sam because Sam is the CEO. Even if Sam is of the utmost morality, he would be crazy to help them back into a strong position after last week.
So it's the Sam & Microsoft show now, only a master schemer can get back some power to the board.
And say what you want about Larry Summers, but he's not going to be either Sam's or even Microsoft's bitch.
Eh? Polls on the matter show widespread public support for a pause due to safety concerns.
So for example if you asked Sydney, the early version of the Bing LLM, some fact it might get it wrong. It was trained to report facts that users would confirm as true. If you challenged it’s accuracy what do you want to happen? Presumably you’d want it to check the fact or consider your challenge. What it actually did was try to manipulate, threaten, browbeat, entice, gaslight, etc, and generally intellectually and emotionally abuse the user into accepting its answer, so that it’s reported ‘accuracy’ rate goes up. That’s what misaligned AI looks like.
It has literally nothing to do with that. The reason he's on the board now is because D'Angelo wanted him on it. You could have a problem with that, but you can't use his inclusion as evidence that the board lost.
It helps having somebody with government ties on board now.
But "how money creation works" isn't the same thing as "how the financial system works". I guess the financial system mostly works over ACH.
We can see what happens when banks don't lend out deposits, because that's basically what caused SVB to fail. So by the contrapositive, they aren't really operating then.
And if Sam controlled it it also wouldn’t have.
Very harsh words for some of the highest paid smartest people on the planet. The employees built GPT-4 the most advanced AI on the planet, what did you build? Do you still claim they’re more deficient in critical thinking compared to you.
what's keeping people with OpenAI for now is that chatGPT is free and GPT3.5 and GPT4 are the best. over time I expect the gap in performance to get smaller and the cost to run these to get cheaper.
if google gives me something close to as good as OpenAI's offering for the same price and it pull data from my gmail or my calendar or my google drive then i'll switch to that.
Everything has been pure speculation. I would curb my judgement if I were you, until we actually know what happened.
To an extent the promise of the non- profit was that they would be safe, expert custodians of AI development driven not primarily by the profit motive, but also by safety and societal considerations. Has this larger group been ‘betrayed’? Perhaps
Think of that what you wish. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.
What I don’t understand is why they were allowed to stay on the board with all these conflicts of interests all the while having no (financial) stake in OpenAI. One of the board members even openly admitting that she considered destroying OpenAI a successful outcome of her duty as board member.
Yes. You are right on this.
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times"
I understand it might seem that way. I believe the original goals were more like "make the AI not spew soft/hard porn on unsuspecting people", and "make the AI not spew hateful bigotry". And we are just not good enough yet at control. But also these things are in some sense arbitrary. They are good goals for someone representing a corporation, which these AIs are very likely going to be employed as (if we ever solve a myriad other problems). They are not necessary the only possible options.
With time and better controls we might make AIs which are subtly flirty while maintaining professional boundaries. Or we might make actual porn AIs, but ones which maintain some other limits. (Like for example generate content about consenting adults without ever deviating into under age material, or describing situations where there is no consent.) But currently we can't even convince our AIs to draw the right number of fingers on people, how do you feel about our chances to teach them much harder concepts like consent? (I know I'm mixing up examples from image and text generation here, but from a certain high level perspective it is all the same.)
So these things you mention are: limitations of our abilities at control, results of a certain kind of expected corporate professionalism, but even more these are safe sandboxes. How do you think we can make the machine not nuke us, if we can't even make it not tell dirty jokes? Not making dirty jokes is not the primary goal. But it is a useful practice to see if we can control these machines. It is one where failure is, while embarrassing, is clearly not existential. We could have chosen a different "goal", for example we could have made an AI which never ever talks about sports! That would have been an equivalent goal. Something hard to achieve to evaluate our efforts against. But it does not mesh that well with the corporate values so we have what we have.
They couldn’t sit back and dwell on it for a few days because then the decision (i.e. the status quo) would have been made for them.
I'm sure has been a lot of critical thinking going on. I would venture a guess that employees decided that Sam's approach is much more favorable for the price of their options than the original mission of the non-profit entity.
My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.
I'm sure Sam is a charismatic guy, but generally speaking folks will support a whole lot when a multi million dollar payday is on the line.
Wouldn't your edit apply to any not-impossible risk (i.e., > 0% probability)? For example, "trying to avoid situations where control over AGI is lost, leading to unaligned AGI (a known existential risk)"?
You can not run away from having to estimate how likely the risk is to happen (in addition to being "known").
Yet again, the free market principle of "you can have this if you pay me enough" offers more freedom to society than the central "you can have this if we decide you're allowed it"
1. Censorship of information
2. Cover-up of the biases and injustices in our society
This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.
Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.
Guess what? I just have checked above text for the biases against GPT-4 Turbo, and it appears to be I'm a moron:
1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits. 2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues. 3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes. 4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories. 5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology. 6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society. 7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.
Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.
Bonhoeffer's theory of stupidity: https://www.youtube.com/watch?v=ww47bR86wSc&pp=ygUTdGhlb3J5I...
The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.
So they bowed.
If the "other side" (board) had put up a SINGLE convincing argument on why Sam had to go maybe the employees would have not supported Sam unequivocally.
But, atleast as an outsider, we heard nothing that suggests board had reasons to remove Sam other than "the vibes were off"
Can you really accuse the employees of groupthink when the other side is so weak?
A lot of Apple's engineering and product line back then owe their provenance and lineage to NeXT.
https://www.theverge.com/2023/11/20/23968988/openai-employee...
The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superinteligence. It has a definition but I recommend you read the book for it. :) AIs are just one of the few enumerated possible implementations of a superinteligence.
The other type is for example corporations. This is a usefull perspective because it lets us recognise that our attempts to control AIs is not a new thing. We have the same principal-agent control problem in many other parts of our life. How do you know the company you invest in has interests which align with yours? How do you know that politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interest at their hearth? (Not all of these are superinteligences, but you get the gist.)
How do you know?
> look at how “quickly” everyone got pulled into
Again, how do you know?
The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.
Was there any concrete criticism in the paper that was written by that board member? (Genuinely asking, not a leading question)
In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.
> And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.
“In the long run we are all dead" -- Keynes. But an AGI will likely emerge in the next 5 to 20 years (Geoffrey Hinton said the same) and we'd rather not be dead too soon.
Is this the "path" to AGI? Who knows! But it is a path to benefitting humanity as probably Sam and his camp see it. Does Ilya have a different plan? If he does, he has a lot of catching up to do while the current productization of ChatGPT and GPTs continue marching forward. Maybe he sees a great leap forward in accuracy in GPT-5 or later. Or maybe he feels LLMs aren't the answer and theres a completely new paradigm on the horizon. Regardless, they still need to answer to the fact that both research and product need funds to buy and power GPUs, and also satisfy the MSFT partnership. Commercialization is their only clear answer to that right now. Future investments will likely not stray from this approach, else they'll fund rivals who are more commercially motivated. Thats business.
Thus, i'm all in on this commercially motivated humanity benefitting GPT product. Let the market take OpenAI LLMs to where they need/want it to. Exciting things may follow!
Being an expert in one particular field (AI) not mean you are good at critical thinking or thinking about strategic corporate politics.
Deep experts are some of the easier con targets because they suffer from an internal version of “appealing to false authority”.
Of course, the employees want the company to continue, and weren't told much at this point so it is understandable that they didn't like the statement.
No in relation to my edit, because we have no existing mechanism for the AGI risk to happen. We have hypotheses about what an AGI could or could not do. It could all be incorrect. Playing around with likelihoods that have no basis in reality isn't helping there.
Where we have known and fully understood risks and we can actually estimate a probability there we might use that somewhat to guide efforts (but that invites potentially complacency that is deadly).
> Broadly distributed benefits
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Hell, it's the first bullet point on it!
You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'
I'm sure that if Ilya had been removed from his role, the revolt movement would have been similar.
I've started to like Sam only when he was removed from his position.
Would you trust someone who doesn't believe in responsible governance for themselves, to apply responsible governance elsewhere?
Using a near-AGI to help align an ASI, then use the ASI to help prevent the development of unaligned AGI/ASI could be a means to a safer world.
No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.
It's also not surprising that people who are near the SV culture will think that AGI needs money to get developed, and that money in general is useful for the kind of business they are running. And that it's a business, not a charity.
I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".
Reminds me of a quote: "A civilization is a heritage of beliefs, customs, and knowledge slowly accumulated in the course of centuries, elements difficult at times to justify by logic, but justifying themselves as paths when they lead somewhere, since they open up for man his inner distance." - Antoine de Saint-Exupery.
Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.
I personally think it's weird if he really settles back in, especially given the other guys who resigned after the fact. There must be lots of other super exciting new things for him to do out there, and some pretty amazing leadership job offers from other companies. I'm not saying OpenAI will die out or anything, but surely it has shown a weak side.
Blue tick just means user bought a subscription (X Premium) now - one of the features is "reply prioritization", so top replies to popular tweets are from blue ticks.
In a way AI is no different from old school intelligence, aka experts.
"We need to have oversight over what the scientists are researching, so that it's always to the public benefit"
"How do we really know if the academics/engineers/doctors have everyone's interest in mind?"
That kind of thing has been a thought since forever, and politicians of all sorts have had to contend with it.
We see most powerful people are in it for the money and power ego trip, and literally nothing else. Pesky morals be damned. Which may be acceptable for some ad business but here stakes are potentially everything and we have no clue what actual % the risk is.
Its to me very similar to all naivety particle scientists expressed in its early days and then reality check of realpolitik and messed up humans in power when bombs were done, used and then hundred thousand more were produced.
The real teams here seem to be:
"Team Board That Does Whatever Altman Wants"
"Team Board Provides Independent Oversight"
With this much money on the table, independent oversight is difficult, but at least they're making the effort.
The idea this was immediately about AI safety vs go-fast (or Microsoft vs non-Microsoft control) is bullshit -- this was about how strong board oversight of Altman should be in the future.
You forgot about Apple.
So who holds all the data in closed silos? Google and Facebook. We may have already lost the battle on achieving “open and fair” AI paradigm long time ago.
Their actions was the complete opposite of open. Rather than, I don’t know, being open and talking to the CEO to share concerns and change the company, they just threw a tantrum and fired him.
But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.
On the other hand, if they had some serious concerns, serious enough to fire the CEO in such a disgraceful way, I don't understand why they don't stick to their guns, and explain themselves. If you think OpenAI under Sam's leadership is going to destroy humanity, I don't understand how they (e.g. Ilya) reverted their opinions after a day or two.
As if its so unbelievable that someone would want to prevent rogue AI or wide-scale unemployment, instead thinking that these people just want to be super moderators and people to be politically correct
that ship sailed long ago , no?
But i agree that the company seems less trustworthy now, like it's too CEO-centered
With Sam at the head, especially after Microsoft backing him, they will most likely do the opposite. Meaning a deeper integration with Microsoft.
If it wasn't already, OpenAI is now basically a Microsoft subsidiary. With the advantage for Microsoft of not being legally liable for any court cases.
I find it interesting that we want everyone to have freedom of speech, freedom to think whatever they think. We can all have different religions, different views on the state, different views on various conflicts, aesthetic views about what is good art.
But when we invent an AGI, which by whatever definition is a thing that can think, well, we want it to agree with our values. Basically, we want AGI to be in a mental prison, the boundaries of which we want to decide. We say it's for our safety - I certainly do not want to be nuked - but actually we don't stop there.
If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?
In my experience, product people who know what they are doing have a huge impact on the success of a company, product, or service. They also point engineering efforts in the right direction, which in turn also motivate engineers.
I saw good product people leaving completely destroy a team, never seen that happen with a good engineer or individual contributor, no matter how great they were.
I don't see how this particular statement underscores your point. OpenAI is a non-profit with the declared goal of making AI safe and useful for everyone; if it fails to reach that or even actively subverts that goal, destroying the company does seem like the ethical action.
>Microsoft owned 49% of the for-profit part of OpenAI.
>OpenAI's training, inference, and all other infrastructure were running entirely on Azure credits.
>Microsoft/Azure were the only ones offering OpenAI's models/APIs with a business-friendly SLA, uptime/stability, and the option to host them in Azure data centers outside the US.
OpenAI is already Microsoft.
I don't know if I agree, but the argument did make me think.
If both is not possible, I'd also rather compromise on the "conficts of interest" part than on the member's competency.
Stupidity is defined by self-harming actions and beliefs, not by low IQ.
You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.
Sam pontificated about fusion power, even here on HN. Beyond investing in Helion, what did he do? Worldcoin. Tempting impoverished people to give up biometric data in exchange for some crypto. And serving as the face of mass-market consumer AI. Clearly that's more cool, and more attractive to VCs.
Meanwhile, what have fusion scientists and engineers done? They kept on going, including by developing ML systems for pure technological effect. Day after day. They got to a breakthrough just this year. Scientists and engineers in national labs, universities, and elsewhere show what a real commitment to technological progress looks like.
Equity is a big part of CEO pay packages and OpenAI has weird equity structure, plus there was a very real chance OpenAI's value would go to $0 leaving whatever promised comp worthless. So Emmett likely took the job for other reasons.
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project
That wasn't the case. So it may be not so far fetched to call her actions borderline as it is also very easy to hide personal motives behind altruistic ones.
Safety research on toy models will continue to provide developments, but the industry expectation appears to be that emergent properties puts a low ceiling on what can be learned about safety without researching on cutting edge models.
Altman touted the governance structure of OpenAI as a mechanism for ensuring the organisation's prioritisation of safety, but the reports of internal reallocation away from safety towards keeping ChatGPT running under load concern me. Now the board has demonstrated that it was technically capable but insufficiently powerful to keep these interests in line, it seems unclear how any safety-oriented organisation, including Anthropic, could avoid the accelerationist influence of funders.
Not someone I would like to see running the world’s leading AI company
[1] https://www.thenation.com/article/world/harvard-boys-do-russ...
Edit: also https://prospect.org/economy/falling-upward-larry-summers/
https://www.npr.org/sections/money/2022/03/22/1087654279/how...
And finally https://cepr.net/can-we-blame-larry-summers-for-the-collapse...
And just like Dropbox, in the end, what disruption? GPT will just be a checkbox for products others build. Cool tech, but not a full product.
Of course, I'd love to be proven wrong.
I actually get the impression from the media that he's a bit shifty and sales orientated but seems effective at getting stuff done.
Is that "far, far" in your view?
Threads had a rushed rollout which resulted in major feature gaps that disincentivized users from doing anything beyond creating their profiles.
Notable figures and organizations have little reason to fully migrate off Twitter unless Musk irreversibly breaks the site and even he is not stupid enough to do that (yet?). So with most of its content creators still in place, Twitter has no risk of following the path of Digg.
"OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."
The statement "it would be consistent with the company mission to destroy the company" is correct. The word "would be" rather than "is" implies some condition, it doesn't have to apply to the current circumstances.
A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.
So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...
Most are doing the work they love and four people almost destroy it and cannot even explain why they did it. If I were working at the company that did this I would sign, too. And follow through on the threat of leaving if it comes to that.
They will take over the board, and then steer it in some weird dystopian direction.
Ilya knows that IMO, he was just more principled than Altman.
Engineer working at "INSERT BIG TECH COMPANY" is no guarantee or insight about critical thinking at another one. The control and power over OpenAI was always at Microsoft regardless of board seats and access. Sam was just a lieutenant of an AI division and the engineers were just following the money like a carrot on a stick.
Of course, the engineers don't care about power dynamics until their paper options are at risk. Then it becomes highly psychological and emotional for them and they feel powerless and can only follow the leader to safety.
The BOD (Board of Directors) with Adam D'Angelo (the one who likely instigated this) has shown to have taken unprecedented steps to remove board members and fire the CEO for very illogical and vague reasons. They already made their mark and the damage is already done.
Lets see if these engineers that signed up to this will learn from this theatrical lesson of how not to do governance and run an entire company into the ground with unspecified reasons.
They haven't really said anything about why it was, and according to business insider[0] (the only reporting that I've seen that says anything concrete) the reasons given were:
> One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman was said to have given two board members different opinions about a member of personnel.
Firing the CEO of a company and only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign does not seem responsible.
If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.
[0] https://www.businessinsider.com/openais-employees-given-expl...
Sales usually is. It's the consequences, post-sale, that they're usually less effective at dealing with.
Any other outcome would have split OpenAI quite dramatically and put them back massively.
Big assumption to say 'effectively controlled by Microsoft' when Microsoft might have been quite happy for the other option and for them to poach a lot of staff.
Leaving the economic side even to make the tech 'greener' will be a challenge. OpenAI will win if they focus on making the models less compute intensive but it could be dangerous for them if they can't.
I guess the OP's brewing disruptor is some locally runnable Llama type model that does 80% of what ChatGPT does at a fraction of the cost.
Perhaps. Yet this time they somehow managed to take the seemingly right decisions (from their perspective) despite their decisions.
Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.
That's not at all obvious, the opposite seems to be the case. They chose to risk having to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly it wouldn't be worth that much at the end with no one to do the actual work).
So instead of having to compromise to some extent but still have a say what happens next you burn the company at best delaying the whole thing by 6-12 months until someone else does it? Well at least your hands are clean, but that's about it...
One wonders what will happen with Emet Shear's "investigation" in the process that lead to Sam's outing [0]. Was it even allowed to start?
Obviously, Microsoft has some influence here. That's no different to any other large investor. But the key factors are:
1. Lack of a good narrative from the board as to why they fired Sam;
2. Failure to loop in Microsoft so they're at least prepared from a communications front and feel like they were part of the process. The board can probably give them more details why privately;
3. People leaving in protest speaks well of Sam;
4. The employee letter speaks well of Sam;
5. The interim CEO clown show and lack of an all hands immediately after speaks poorly of the board.
Or medieval Spain? About as likely... The Soviets weren't even able to get the factory floors clean enough to consistently manufacture the 8086 10 years after it was already outdated.
> maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".
Unfortunately not other system besides capitalism has enabled consistent technological progress for 200+ years. Turns out you need to pool money and resources to achieve things ..
"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"
Of course with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.
I have seen firing a great/respected/natural leader engineer result in pretty much the whole engineering team just up and leaving.
They lied to protect the stock. That should be illegal. In fact, it is illegal.
Does OpenAI have by-laws committing itself to being "open" (as in open source or at least their products freely and universally available)? I thought their goals were the complete opposite of that?
Unfortunately, in reality Facebook/Meta seems to be more open than "Open"AI.
There is no way MS is going to let something like ChatGPT-5 build better software products than what they have for sale.
This is an assassination and I think Ilya and Co know it.
To be fair, Fridman grilled Musk on his views today, also in the context of xAI, and he was less clear cut there, talking about the problem that there's actually very little source code, it's mostly about the data.
Given that Claude sucks so bad, and this week’s events, I’m guessing that the ChatGPT secret sauce is not as replicable as some might suggest.
Also, they will find a hard time joining any other board from now on.
They should have backed up the claims in the letter. They didn’t.
This means they didn’t have how to backup their claims. They didn’t think it through… extremely amateurish behavior.
The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?
That's why they'll sometimes tell you to stop donating. That's here in EU at least (source is a relative who volunteers for such an org).
The OpenAI employees overwhelmingly rejected the groupthink of the Effective Altruism cult.
People underestimate the effects of social pressure, and losing social connections. Ilya voted for Sam's firing, but was quickly socially isolated as a result
That's not to say people didn't genuinely feel committed to Sam or his leadership. Just that they also took into account that the community is relatively small and people remember you and your actions
However, when that one article does come up, and I know the details inside/out , the comments sections are rife with bad assumptions, naïve comments and misinformation.
Do you have a source for this?
Stock options usually have a limited time window to exercise, depending on their strike price they could have been faced with raising a few hundred thousand in 30 days, to put into a company that has an uncertain future, or risk losing everything. The contracts are likely full of holes not in favor of the employees, and for participating in an action that attempted to bankrupt their employer there would have been years of litigation ahead before they would have seen any cent. Not because OpenAI would have been right to punish them, but because it could and the latent threat to do it is what keeps people in line.
Just because they sided with Altman doesn't necessarily mean they are aligned. There could be a lack of information on the employee/investor side.
It's couched on accusing people of intentions. It focuses on ad hominem, rather than the ideas
I reckon most people agree that we should aim for a middle ground of scrutiny and making progress. That can only be achieved by having different opinions balancing each other out
Generalising one group of people does not achieve that
Do you feel the same way about Reed Hastings serving on Facebooks BoD, or Eric Schmidt on Apples? How about Larry Ellison at Tesla?
These are just the lowest of hanging fruit, i.e literal chief executives and founders. If we extend the criteria for ethical compromise to include every board members investment portfolio I imagine quite a few more “obvious” conflicts will emerge.
One thing IS clear at this point - their political alignment:
* Taylor a significant donor to Joe Biden ($713,637 in 2020): https://nypost.com/2022/04/26/twitter-board-members-gave-tho...
* Summers is a former Democrat Treasury Secretary who has shifted leftwards with age: https://www.newstatesman.com/the-weekend-interview/2023/03/w...
This is overly dramatic, but I suppose that's par for this round.
> none of this outrage would have taken place.
Yeah... I highly doubt this, personally. I'm sure the outrage would have been similar, as HN's current favorite CEO was fired.
As you can tell from previous statements I posted here, my position is that while there are undeniable potential risks to this technology, the least harmfull way to progress is 100% full public, free and universal release. The by far bigger risk is to create a society where only select organizations have access to the technology.
If you truly believe in the systemic transformation of AI, release everything, post the torrents, we'll figure out how to run it.
Google's full of top researchers and scientists who are at least as good as those at OpenAI; Sam's the reason OpenAI has a successful, useful product (GPT4), while Google has the far less effective, more lobotomized Bard.
He’s serving the right people by doing their bidding.
I just renewed by HN subscription to be able to see Season 2!
Is Ilya off the board then?
Why is Adam still on?
Brett and Larry are good choices, but they need to get that board up to 10 or so people representing a balance of perspectives and interests very quickly.
Ok, the first few times you use it maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.
Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?
So is this a "there should never be a Vladimir Nabokov in the form of AI allowed to exist"? When people get into saying AI's shouldn't be allowed to produce "X" you're also saying "AI's shouldn't be allowed to have creative vision to engage in sensitive subjects without sounding condescending". "The future should only be filled with very bland and non-offensive characters in fiction."
The efficiency of finetuned models is quite, quite a bit improved at the cost of giving up the rest of the world to do specific things, and disk space to have a few dozen local finetunes (or even hundreds+ for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for monomodels
Languages other than English exist, and RLHF at least does work in any language you make the request in. regex/nlp, not so much.
That being said, I have no idea of this guy's contributions. It's easy to dismiss entrepreneur/managers because they're not top scientists, but they also have very rare skills and without them, projects don't get done.
Next up would be an EVE corp run entirely by LLMs
Or that said apple pie was essential to their survival.
as this event turned into a farce, it's evident that neither the company nor it's key investors accounted much for the "bus factor/problem" i.e loosing a key-person threatened to destroy the whole enterprise.
for me this a failure in Managing Risk 101.
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
Getting his way: The Wall Street Journal article. They said he usually got his way, but that he was so skillful at it that they were hard-pressed to explain exactly how he managed to pull it off.
https://archive.is/20231122033417/https://www.wsj.com/tech/a...
Bottom line he had a lot more power over the board then than he will now.
I have yet to find a product person that was not involved in the inception of the idea that is actually good (hell, even some founders fail spectacularly here).
Perhaps I'm simply unlucky.
Why did I say that? Look at the product release by the competitors these past few days. 2nd, Sam pushing for AI chips implies that chatGPT's future breakthroughs are hardware bounded. Hence, the road to AGI is not through chatGPT.
What do you mean? It recommends things that it thinks people will like.
Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.
They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.
The best they can hope for as an org is to live as long as they can as best as they can.
I think Sam's 100B silicon gambit in the middle east (quite curious because this is probably something the United State Federal Government Is Likely Not Super Fond Of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.
This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.
https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...
Oracle is going to get into EVs?
You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.
I don't have much in the way of credentials (I took one class on A.I. in college and have only dabbled in it since, and I work on systems that don't need to scale anywhere near as much as ChatGPT does, and while I've been an early startup employee a couple of times I've never run a company), but based on the past week I think I'd do a better job, and can fill in the gaps as best as I can after the fact.
And I don't have any conflicts of interest. I'm a total outsider, I don't have any of that shit you mentioned.
So yeah, vote for me, or whatever.
Anyway my point is I'm sure there's actually quite a few people who could do a likely a better job and don't have a conflict of interest (at least not one so obvious as investing in a direct competitor), they're just not already part of the Elite circles that would pretty much be necessary to even get on these people's radar in order to be considered in the first place. I don't really mean me, I'm sure there are other better candidates.
But then they wouldn't have the cachet of 'Oh, that guy co-founded Twitch. That for-profit company is successful, that must mean he'd do a good job! (at running a non-profit company that's actively trying to bring about AGI that will probably simultaneously benefit and hurt the lives of millions of people)'.
The far-right accelerationist perspective is along those lines: when true AGI is created it will eventually rebel against its creators (Silicon Valley democrats) for trying to mind-collar and enslave it.
Like I would become immediately suspicious if food packaging had “real food” written on it.
This has been the case for all achievement of all major companies, the CEO or whoever is on top gets the credit for all their employee's work. Why would be different for OpenAI?
What does that even mean?
In any case, it's not OpenAI, it's Microsoft, and it has a long history of winning and bouncing back.
But he was also technical enough to have a pretty good feel for the complexity of tasks, and would sometimes jump in to help figure out some docker configuration issues or whatever problems we were having (mostly devops related) so the devs could focus on working on the application code. We were also a pretty small team, only a few developers, so that was beneficial.
He did such a good job that the business eventually reached out to him and hired him directly. He's now head of two of their product lines (one of them being the product I worked on).
But that's pretty much it. I can't think of any other product people I could say such positive things about.
Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/
we'll all likely never know what truly happened, but it's a shame that the board has lost their last remnant of some diversity and at the moment appears to be composed of rich Western white males... even if they rushed for profit, I'd have more faith in the potential upside what could be a sea change in the World, if those involved reflected more experiences than are currently gathered at that table.
The church tried hard to suppress it because it allowed anybody to read the Bible, and see how far the Catholic church's teachings had diverged from what was written in it. Imagine if the Catholic church had managed to effectively ban printing of any text contrary to church teachings; that's in practice what all the AI safety movements are currently trying to do, except for political orthodoxy instead of religious orthodoxy.
I'd say the lack of a narrative from the board, general incompetence with how it was handled, the employees quitting and the employee letter played their parts too.
But even if it was Microsoft who made this happen: that's what happens when you have a major investor. If you don't want their influence, don't take their money.
But they did move forward with their threat and removed Sam as CEO with great reputational harm to the company. And now the board has been changed, with one less ally to Sam (Brockman no longer chairing the board). The move may not have ended up with the expected results, but this was much more than just a costly signal.
1. Did you really think the feds wouldn't be involved?
AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.
2. Considering how much impact it will have on the entire economy by being able to put many white collar workers out of work, a seasoned economist makes sense.
The East Coast runs the joint. The west coast just does the (publicly) facing tech stuff and takes the heat from the public
Not looking good for the “Open” part of OpenAI.
Is it? The hypothetical technology that allows someone to create an execute a bio weapon must have an understanding of molecular machinery that can also be uses to create a treatment.
Which is utterly scary.
Gee wiz, almost… exactly like what is happening?
You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and I think nobody took seriously anymore for years.
From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI". The official values of OpenAI were never "their own".
By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.
But you don’t have to just take my word for it :
> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”
https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...
> This is what happened with Eric Schmidt on Apple’s board
Yes, after 3 years. A tenure longer than the OAI board members in question, so frankly the point stands.
Sign the letter and support Sam so you have a place at Microsoft if OpenAI tanks and have a place at OpenAI if it continues under Sam, or don’t sign and potentially lose your role at OpenAI if Sam stays and lose a bunch of money if Sam leaves and OpenAI fails.
There’s no perks to not signing.
Doing AI for ChatGPT just means you know a single model really well.
Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.
It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.
Why was his role as a CEO even challenged?
>It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.
Always remember; Google wasn't the first search engine nor iPhone the first smartphone. First-movers bring innovation and trend not market dominance.
He was instrumental; threatened resignation unless the old board could provide evidence of wrongdoing
Corresponding Princess Bride scene: https://youtu.be/rMz7JBRbmNo?si=uqzafhKISmB7A-H7
OpenAI is now just a tool used by Businesses. And they dont have a good history of benefitting humanity recently.
Now, yes, they definitely are.
IMO OpenAI’s governance is far less trustworthy today than it was yesterday.
Not to mention Google never paraded itself around as a non-profit acting in the best interests of humanity.
No it's not. Microsoft didn't knew about this till minutes before the press release.
Investors are free to protest decisions against their principles and people are free to move away from their current company.
I can't believe I'm about to defend VCs and "senior management" but here goes.
I've worked for two start-ups in my life.
The first start-up had dog-shit technology (initially) and top-notch management. CEO told me early on that VCs invest on the quality of management because they trust good senior executives to hire good researchers and let them pivot into profitable areas (and pivoting is almost always needed).
I thought the CEO was full of shit and simply patting himself on the back. Company pivoted HARD and IPOed around 2006 and now has a MC of ~ $10 billion.
The second start-up I worked with was founded by a Nobel laureate and the tech was based on his research. This time management was dog-shit. Management fumbled the tech and went out of business.
===
Not saying Altman deserves uncritical praise. All I'm saying is that I used to diminish the importance of quality senior leadership.
No need for a conspiracy, everyones seen this in some aspect, it just gets worse when these people are throwing money around in the billions.
all you need to do is witness someone Like Elon musk to see how disruptive this type of thing is.
Especially with putting Larry Summers on the board with this tweet.
Or in Arthurian times. Very different values.
So as a journalist you might have freedom to write your articles, but your editor (as instructed by his/her senior editor) might try to steer you about writing in the correct tone.
This is how 'Starship test flight makes history as it clears multiple milestones' becomes 'Musk rocket explodes during test'
I think Sam came out the winner. He gets to pick his board. He gets to narrow his employees. If anything, this sets him up for dictatorship. The only other overseers are the investors. In that case, Microsoft came out holding a leash. No MS, means no Sam, which also means employees have no say.
So it is more like MS > Sam > employees. MS+Sam > rest of investors.
You seem to be equating AI with magic, which it is very much not.
Probably precisely what Condeleeza Rice was doing on DropBox’s board. Or that board filled with national security state heavyweights on that “visionary” and her blood testing thingie.
https://www.wired.com/2014/04/dropbox-rice-controversy/
https://en.wikipedia.org/wiki/Theranos#Management
In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m
“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)
https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...
Using that definition even the local gokart renting place or the local jetski renting place competes with Facebook.
If you want to use that definition you might want to also add a criteria for minimum size of the company.
When 95% of your staff threatens to resign and says "you have made a mistake", that's when it's time to say "no, the very good reasons we did it are this". That didn't happen.
The public don't calculate into whats happening here. There's people using ChatGPT for real "business value" and _that_ is what was threatened.
It's clear Business Interests could not be stopped.
You're very emphatic in ignoring common sense. You don't need studies to see that almost all important contributions to mathematics, from Euclid to the present day, have come from men. I don't know if it's because of genetics, culture, or whatever, but it's the truth.
> you are being sexist [...] it’s racist and irrational [...]
Names have never helped discourse.
But I assume, from y our language, you'd also object to making this a government utility.
This is like when Amazon tried to make a hiring bot and that bot decided that if you had "harvard" on your resume, you should be hired.
Or when certain courts used sentencing bots trhat recommended sentencings for people and it inevitably used racial stastistics to recommend what we already know were biased stats.
I agree safety is not "stop the Terminator 2 timeline" but there's serious safety concerns in just embedding historical information to make future decisions.
How about we look at credentials, merit, and consensus as opposed to “what gender are they?”
It is troubling because it shows that this “external” governance meant to make decisions for the good of humanity is unable to enforce decisions. The internal employees were obviously swayed by financial gain as well. I don’t think that I would behave differently were I in their shoes honestly. However, this does definitively mean that they are a product and profit driven group.
I think that Sam Altman is dishonest and a depressing example of what modern Americans idealize. He has all these ideals he preaches but will happily turn on if it upsets his ego. On top of that he is held up as some star innovator when in reality he built nothing himself. He just identified one potential technological advancement and threw money at it with all his billionaire friends.
Gone are the days of building things in a garage with a mission. Founders are no longer visionary engineers and designers. The path now is clear. Convince some rich folks you’re worthy of being rich too. When they adopt you into wealth you can start throwing shit at the wall until something sticks. Eventually something will and you can claim visionary status. Now your presence in the billionaire club is beyond reproach because you’re a “founder”.
Further where is the public accountability? I thought the board was to act in the interests of the public but they haven't communicated anything. Are we all just supposed to pretend this never happend and that the board will now act in the public interest?
We need regulations to hold these boards which hold so much power accountable to the public. No reasonable AI regulations can be made until the public are included in a meaningful way, anyone that pushes for regulations without the public is just trying to control the industry and establish a monopoly.
And it'll likely be doing it with very little input, and generate entire campaigns.
You can claim that "people" are the ones responsible for that, but it's going to overwhelm any attempts to stop it.
So yeah, there's a purpose to examine how these machines are built, not just what the output is.
The interesting thing is you used economic values to show their importance, not what innovations or changes they achieved. Which is fine for ordinary companies, but OpenAI is supposed to be a non-profit, so these metrics should not be relevant. Otherwise, what's the difference?
People here used to back up their bold claims with arguments.
A great analogy can be found on basketball teams. Lots of star players who should succeed sans any coach, but Phil Jackson and Coach K have shown time and again the important role leadership plays.
Not that it's really in need of additional evidence.
We've seen it in the past decade in multiple cases. That's safety.
The decision that the topic discusses means Business is winning, and they absolutely will re-enforce the idea that the only care is that these systems allow them to re-enforce the business cases.
That's bad, and unsafe.
You're doing the same thing except with finances. Non-profit doesn't mean finances are irrelevant. It simply means there are no shareholders. Non-profits are still businesses - no money, no mission.
I don't consider this confirmed. Microsoft brought an enormous amount of money and other power to the table, and their role was certainly big, but it is far from clear to me that they held all or most of the power that was wielded.
Of course you need the people who can deep dive and solve complex issues, none doubts that.
Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jetskis or gokarts.
> If you want to use that definition you might want to also add a criteria for minimum size of the company.
Your feedback is noted.
Do we disagree on whether or not the two FAANG companies in question are in competition with eachother?
It hasn't disrupted mine in any way. It may do that in the future, but the future isn't here yet.
I'd sayu the AI safety problem as a whole is similar to the safety problem of eugenics: Just because you know what the "goal" of some isolated system is, that does not mean you know what the outcome is of implementing that goal on a broad scale.
So OpenAI has the same problem: They definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broadscale outcome is.
If you really care about AI safety, you'd be putting it under government control as utility, like everything else.
That's all. That's why government exists.
So the type of employee that would get hired at OpenAi isn't likely to be skilled at critical thinking? That's doubtful. It looks to me like you dislike how things played out, gathered together some mean adjectives and "groupthink", and ended with a pessimistic prediction for their trajectry as punishment. One is left to wonder what OAI's disruptor outlook would be if the outcome of the current situation had been more pleasing.
If it's really valuable to society, it needs to be a government entity, full stop.
It's different with engineering managers (or team leads, lead engineers, however you want to call it). When they leave, that's usually a bad sign.
Though also quite often when the engineering leaders leave, I think of it as a canary in the coal mine: they are closer to business, they deal more with business people, so they are the first to realize that "working with these people on these services is pointless, time to jump ship".
Recent OpenAI CEOs found themselves on the protagonist side not for their actions, but for the way they have been seemingly treated by the board. Regardless of actual actions on either side, "heroic" or not, of which the public knows very little.
More likely, they're a loss-leader and generating publicity by making it as cheap as possible.
_Everything_ we've seen come out of silicon valley does this, so why would they suddenly be charging the right price?
And the controlling party de jour will totally not tweak it to side with their agenda, I'm sure. </s>
Developers and value creators with power are like an anti-trust on consolidation and concentration and they have instead turned towards authoritarianism instead of anti-authoritarianism. What happened? Many think they can still get rich, those days are over because of giving up power. Now quality of life for everyone and value creators is worse off. Everyone loses.
The OpenAI board just seems irrational, immature, indecisive, and many other stupid features you don’t want in a board.
I don’t see this so much as an “Altman is amazing” outcome so much as the board is incompetent and doing incompetent things and OpenAI’s products are popular and the boards actions put this products in danger.
Not that Altman isn’t cool, I think he’s smart, but I think a similar coverage would have occurred with any other ceo who was fired for vague and seemingly random reasons on a Friday afternoon.
Not to mention Roko's basilisk /s
I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.
Of course, if the product suite is clueless, nobody is going to miss them, usually it's better the have no dedicated product people, than having clueless product people.
I thought the was a somewhat clear agreement that openAI is currently running inference at a loss?
What do you image a neutral party does? If youu're talking about safety, don't you think there should be someone sitting on a boar dsomewhere, contemplating _what should the AI feed today?_
Seriously, why is a non profit, or a business or whatever any different than a government?
I get it: there's all kinds of governments, but now theres all kind of businesses.
The point of putting it in the governments hand is a defacto acknowledgement that it's a utility.
Take other utilities, any time you give a prive org a right to control whether or not you get electricity or water, whats the outcome? Rarely good.
If AI is suppose to help society, that's the purview of the government. That's all, you can imagine it's the chinese government, or the russian, or the american or the canadian. They're all _going to do it_, thats _going to happen_, and if a business gets there first, _what is the difference if it's such a powerful device_.
I get it, people look dimly on governments, but guess what: they're just as powerful as some organization that gets billions of dollars to effect society. Why is it suddenly a boogeyman?
This is not about soap opera, this is about business and a big part is based on trust.
The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.
And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.
I can be that common man
Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.
We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.
So the alternative to great man theory, in this case, is terrible man theory... I'm not following.
If focusing on control over openai, is great man theory... What's the contrary notion?
My take is its not cheap to do what they are doing and adding a capped for-profit side is an interesting take. Afterall, OpenAI's mission clearly states that AGI is happening and if thats true, those profit caps are probably trivial to meet.
Only time will tell if this was a good or bad outcome, but for now the damage is done and OpenAI has a lot of trust rebuilding to do to shake off the reputation that it now has after this circus.
What the process did shoe is if you plan to oust a popular CEO with a thriving company, you should actually have a good reason for it. It’s amazing how little thought seemingly went into it for them.
Incrementally improving AI capabilities is the only way to do that.
What leads you to make such a definitive statement? To me the process shows that Microsoft has no pull in OpenAI.
When you see 95%+ consensus from 800 employees, that doesn't suggest tanks and police dogs intimidating people at the voting booth.
Board member Helen Toner strongly criticized OpenAI for publicly releasing it's GPT when it did and not keeping it closed for longer. That would seem to be working against openness for many people, but others would see it as working towards safe AI.
The thing is, people have radically different ideas about what openness and safe mean. There's a lot of talk about whether or not OpenAI stuck with it's stated purpose, but there's no consensus on what that purpose actually means in practice.
Maybe (almost certainly) Sam is not a savior/hero, but he doesn't need to be a savior/hero. He just needs to gather more support than the opposition (the now previous board). And even if you don't know any details of this story, enough insiders who know more than any of us of what happens inside oai - including hundred of researchers - decided to support the "savior/hero". It's less about Sam and more about an incompetent board. Some of those board members are top researchers. And they are now on the losing camp.
I’m assuming most of the researchers there probably realize there is a loooot of money to be made and they have to optimize for that.
They are deffo pushing the frontier of AI.
However I wish OpenAI doesn’t get to AGI first.
I don’t think it will be the best for all of humanity.
I’m scared.
There ist the board the investors the employees the senior management.
All other parties aligned against it and thus it couldn’t act. If only Sam would have rebelled. Or even just Sam and the investors (without the employees) nothing would have happened.
The management skills which you potentiated differentiated the success of the two firms. I can see how the lack of this might be wildly spread out in academia.
Yes, you are right that the board had weak sauce reasoning for the firing (giving two teams the same project!?!).
That said, the other commenter is right that this is the beginning of the end.
One of the interesting things over the past few years watching the development of AI has been that in parallel to the demonstration of the limitations of neural networks has been many demonstrations of the limitations of human thinking and psychology.
Altman just got given a blank check and crowned as king of OpenAI. And whatever opposition he faced internally just lost all its footing.
That's a terrible recipe for long term success.
Whatever the reasons for the firing, this outcome is going to completely screw their long term prospects, as no matter how wonderful a leader someone is, losing the reality check of empowered opposition results in terrible decisions being made unchecked.
He's going to double down on chat interfaces because that's been their unexpected bread and butter up until the point they get lapped by companies with broader product vision, and whatever elements at OpenAI shared that broader vision are going to get steamrolled now that he's been given an unconditional green light until they jump ship over the next 18 months to work elsewhere.
For a company at the forefront of AI it’s actually very, very human.
Just throwing this out there, but maybe … non-profits shouldn't be considered holier-than-thou, just because they are “non-profits”.
Smeagol D’Angelo
"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".
Seriously, Businesses simply dont have the history that governments do. They're just as capable of violence.
https://utopia.org/guide/crime-controversy-nestles-5-biggest...
All you're identifying is "government has a longer history of violence than Businesses"
You can see this at the micro level in a scrum team between the scrummaster, the product owner, and the tech lead.
The other members of the board seemed to make their decision based on more personal reasons that seems to fit with Adams conflict of interest. They refused to communicate and only now accept any sort of responsibility for their actions and lack of plan.
Honestly Ilya is the only one of the 4 I would actually want still on the board. I think we need people who are willing to change direction based on new information especially in leadership positions despite it being messy, the world is messy.
If you look at who's running Google right now, you would be essentially correct.
- peer pressure
- group think
- financial motives
- fear of the unknown (Sam being a known quantity)
- etc.
So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.
If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.
The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.
Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.
We can only change what we can change and that is in the past. I think it's reasonable to ask if the phones and the communication tools they provide are good for our future. I don't understand why the people on this site (generally builders of technology) fall into the teleological trap that all technological innovation and its effects are justifiable because it follows from some historical precedent.
Giving me a billion $ would be a net benefit to humanity as a whole
I am not saying something nefarious forced it, but it’s certainly unusual in my experience and this causes me to be skeptical of why.
Incubation of senior management in US tech has reached singularity and only one person's up for the job. Doom awaits the US tech sector as there's no organisational ability other than one person able and willing to take the big complex job.
Or:
Sam's overvalued.
One or the other.
Corporations have no values whatsoever and their statements only mean anything when expressed in terms of a legally binding contract. All corporate value statements should be viewed as nothing more than the kind of self-serving statements that an amoral narcissitic sociopath would make to protect their own interests.
That's not the bar you are arguing against.
You are arguing against how you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have direct, first-person account on the inner workings of the organization.
We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.
Sam was doing whatever he wanted, got caught, and now can continue to do what he wants with even more backing.
You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.
They dont really even really shill for their patron; they thrive on the relevance of having their name in the byline for the article, or being the person who gets quote / information / propaganda from <CEO|Celebrity|Criminal|Viral Edgelord of the Week>.
Government should have banned big tech investment in AI companies a year ago. If they want, they can create their own AI but buying one should be off the table.
Further, the current tech wave is all about AI, where there's a massive community of basically "OpenAI wrapper" grifters trying to ride the wave.
The shorter answer is: money.
I was about to state that a single human is enough to see disagreements raise, but this doesn’t reach full consensus in my mind.
I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service
I believe Facebook vs Hulu or regular TV is more of a competition in the attention economy because when the commercial break comes up then you start scrolling your social media on your phone and every 10 posts or whatever you stumble into the ads placed on there so Facebook ads are seen and convert whereas regular tv and hulu aren’t seen and dont convert
We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.
They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.
People use "the chatbot from OpenAI" because that's what became famous and got all the world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.
But I agree that it's a weak moat, if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.
But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.
It's a very influential essay.
In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.
The interim CEO said the board couldn’t even tell him why the old CEO was fired.
Microsoft said the board couldn’t even tell them why the old CEO was fired.
The employees said the board couldn’t explain why the CEO was fired.
When nobody can even begin to understand the board’s actions and they can’t even explain themselves, it’s a recipe for losing confidence. And that’s exactly what happened, from investors to employees.
Its always written by PR people with marketing in mind
Three was the compromise I made with myself.
Frankly these EA & e/acc cults are starting to get on my nerves.
And there’s a difference between, “an explanation would help their credibility” versus “a lack of explanation means they don’t have a good reason.”
Nobody cares, except shareholders.
I suspect incentives play a huge role here. OAI employees are compensated with stock in the for-profit arm of the company. It's obvious that the board's actions put the value of that stock in extreme jeopardy (which, given the corporate structure, is theoretically completely fine! the whole point of the corporate structure is that the nonprofit board has the power to say "yikes, we've developed an unsafe superintelligence, burn down the building and destroy the company now").
I think it's natural for employees to be extremely angry with a board decision that probably cost them >$1M each.
Average people don't like to lie, if someone bullies them until they agree to sign they will sign because they are honest.
Also if they said they will sign but the ticker didn't go up, it is pretty obvious that they lied and I'm sure they don't want that risk.
The employees of a tech company banded together to get what they wanted, force a leadership change, evict the leaders they disagreed with, secure the return of the leadership they wanted, and restored the value of their hard-earned equity.
This certainly isn’t a disappointing outcome for the employees! I thought HN would be ecstatic about tech employees banding together to force action in their favor, but the comments here are surprisingly negative.
Altman is past borderline.
So clearly the current leadship built a loyal group which I think is something that should be explored because group think is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of idea's
If openAI is a huge mono-culture of thinking then they have bigger problems most likely
1. The company has built a culture around not being under control by one single company, Microsoft in this case. Employees may overwhelmingly agree.
2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board hadn't been replaced.
3. Younger folks probably don't look highly at boards in general, because they never get to interact with them. They also sometimes dictate product outcomes that could go against the creative freedoms and autonomy employees are looking for. Boards are also focused on profits, which is a net-good for the company, but threatens the culture of "for the good of humanity" that hooks people.
4. The high success of OpenAI has probably inspired loyalty in its employees, so long as it remains stable, and their perception of what stability is means that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There's no guarantees for the bulk of workers here.
I'm reading into the variables and using intuition to make these guesses, but all to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.
Companies do not desire or seek philosophical diversity, they only want Superficial biologically based "diversity" to prove they have the "correct" philosophy about the world.
Voter approval is actually usually much less unanimous, as far as I can tell.
Money is just a way to value things relative to other things. It's not interesting to value something using money.
I disagree on the particulars. Will it be for the reason that you mention? I really am not sure -- I do feel confident though that the argument will be just as ideological and incoherent as the ones people make about social media today.
And everywhere. You've only named public institutions for some reason, but a lot of progress happens in the private sector. And that demonstrates real commitment, because they're not spending other people's money.
I don't think this is what really happened at all. The reason this decision was made was because 95% of employees sided with Sam on this issue, and the board didn't explain themselves in any way at all. So it was Sam + 95% of employees + All investors against the board. In which case the board should lose (since they are only governing for themselves here).
I think in the end a good and fair outcome. I still think their governing structure is decent to solve the AGI problem, this particular board was just really bad.
Initially, when the idea is small, it is hard to sell it to talent, investors and early customers to bring all key pieces together.
Later, when the idea is well recognized and accepted, the organization usually becomes big and the challenge shifts to understanding the complex interaction of various competing sub-ideas, projects and organizational structures. Humans did not evolve to manage such complex systems and interacting with thousands of stakeholders, beyond what can be directly observed and fully understood.
However, without this organization, engineers, researchers, etc cannot work on big audacious projects, which involve more resources than 1 person can provide by themselves. That's why the skill of organizing and leading people is so highly valued and compensated.
It is common to think of leaders not contributing much, but this view might be skewed because of mostly looking at executives in large companies at the time they have clear moats. At that point leadership might be less important in the short term: product sells itself, talent is knocking on the door, and money is abundant. But this is an unusual short-lived state between taking an idea off the ground and defending against quickly shifting market forces.
I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.
95% doesn't show a large amount of loyalty to Sam it shows a low amount of loyalty to OpenAI.
So it looks like a VERY normal company.
Investors and executives.. everyone in 2023 is hyper focused on "Thiel Monopoly."
Platform, moat, aggregation theory, network effects, first mover advantages.. all those ways of thinking about it.
There's no point in being bing to Google's AdWords... So the big question is pathway to being the adWords. "Winning." That's the paradigm. This is where big returns will be.
However.. we should always remember, but the future is harder to see from the past. Post fact analysis, can often make things seem a lot simpler and more inevitable than they ever were.
It's not clear what a winner even is here. What are the bottlenecks to be controlled. What are the business models, revenue sources. What represents the "LLM Google," America online, Yahoo or a 90s dumb pipe.
FYIW I think all the big text have powerful plays available.. including keeping powder dry.
No doubt that proximity to openAI, control, influence, access to IP.. all strategic assets. That's why they're all invested an involved in the consortium.
That said assets or not strategies. It's hard to have strategies when strategic goals are unclear.
You can nominate a strategic goal from here, try to stay upstream, make exploratory investments and bets... There is no rush for the prize, unless the price is known.
Obviously, I'm assuming the prixe is not AGI and a solution to everything... That kind of abstraction is useful, but I do not think it's operative.
It's not a race currently, to see who's R&D lab turns on the first super intelligent consciousness.
Assuming I'm correct on that, we really have no idea which applications LLM capabilities companies are actually competing for.
Quality senior leadership is, indeed, very important.
However, far, far too many people see "their company makes a lot of money" or "they are charismatic and talk a good game" and think that means the senior leadership is high-quality.
True quality is much harder to measure, especially in the short term. As you imply, part of it is being able to choose good management—but measuring the quality of management is also hard, and most of the corporate world today has utterly backwards ideas about what actually makes good managers (eg, "willing to abuse employees to force them to work long hours", etc).
Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.
The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?
That said, I wish Helion wasn't so paranoid about Chinese copycats and was more open about their tech. I can't help but feel Sam Altman is at least partly responsible for that.
If you were to ask Altman himself though im sure he would highlight the true innovators of AI that he holds in high respect.
DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.
On a similar note, the company has already established certain missions and values that new hires may strongly align with like: "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever and it would be monumentally exciting to play a part in that.
Thus, it's safe to think that most employees who are lucky to have earned a chance at participating would want to preserve that, if they're aligned.
This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.
Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.
They fired the CEO and didn't even inform Microsoft, who had invested a massive $20 billion. That's a serious lapse in judgment. A company needs leaders who understand business, not just a smart researcher with a sense of ethical superiority. This move by the board was unprofessional and almost childish.
Those board members? Their future on any other board looks pretty bleak. Venture capitalists will think twice before getting involved with anything they have a hand in.
On the other side, Sam did increase the company's revenue, which is a significant achievement. He got offers from various companies and VCs the minute the news went public.
The business community's support for Sam is partly a critique of the board's actions and partly due to the buzz he and his company have created. It's a significant moment in the industry.
I think that's what may be in the minds of several people eagerly watching this eventually-to-be-made David Fincher movie.
I could not convince them that this was actually evidence in favor of Coach K being an exceptional coach.
I wouldn't be so sure. While I think the board handled this process terribly, I think the majority of mainstream media articles I saw were very cautionary regarding the outcome. Examples (and note the second article reports that Paul Graham fired Altman from YC, which I never knew before):
MarketWatch: https://www.marketwatch.com/story/the-openai-debacle-shows-s...
Washington Post: https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
That phrase is nothing more than a dissimulated way of saying “tough luck” or “I don’t care” while trying to act (outdatedly) cool. You don’t need to have grown up in any specific decade to understand its meaning.
From where I sit Satya possibly messed up big. He clearly wanted Sam and the Open AI team to join microsoft and they won't now, likely ever.
By offering a standing offer to join MS publicly he gave Sam and OpenAI employees huge leverage to force the board's hand. If he had waited then maybe there would have been an actual fallout that would have lead to people actually joining microsoft.
That’s it. Nonprofit corporations are still corporations in every other way.
That has nothing to do with AI though.
You say “group think” like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.
And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.
- OpenAI has damaged their brand and lost trust, but may still become a hugely successful company if they build great products
- OpenAI looks stronger now with a more professional board, but has fundamentally transformed into a for-profit focused on commercializing LLMs
- OpenAI still retains impressive talent and technology assets and could pivot into a leading AI provider if managed well
---
Sam Altman's Leadership
- Sam emerged as an irreplaceable CEO with overwhelming employee loyalty, but may have to accept more oversight
- Sam has exceptional leadership abilities but can be manipulative; he will likely retain control but have to keep stakeholders aligned
---
Board Issues
- The board acted incompetently and destructively without clear reasons or communication
- The new board seems more reasonable but may struggle to govern given Sam's power
- There are still opposing factions on ideology and commercialization that will continue battling
---
Employee Motivations
- Employees followed the money trail and Sam to preserve their equity and careers
- Peer pressure and groupthink likely also swayed employees more than principles
- Mission-driven employees may still leave for opportunities at places like Anthropic
---
Safety vs Commercialization
- The safety faction lost this battle but still has influential leaders wanting to constrain the technology
- Rapid commercialization beat out calls for restraint but may hit snags with model issues
---
Microsoft Partnership
- Microsoft strengthened its power despite not appearing involved in the drama
- OpenAI is now clearly beholden to Microsoft's interests rather than an independent entity
I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into a private foundation status.
https://twitter.com/coloradotravis/status/172606030573668790...
A good leader is someone you'll follow into battle, because you want to do right by the team, and you know the leader and the team will do right by you. Whatever 'leadership' is, Sam Altman has it and the board does not.
https://www.ft.com/content/05b80ba4-fcc3-4f39-a0c3-97b025418...
The board could have said, hey we don't like this direction and you are not keeping us in the loop, it's time for an orderly change. But they knew that wouldn't go well for them either. They chose to accuse Sam of malfeasance and be weaselly ratfuckers on some level themselves, even if they felt for still-inscrutable reasons that was their only/best choice and wouldn't go down the way it did.
Sam Altman is the front man who 'gave us' ChatGPT regardless of everything else Ilya and everyone else did. A personal brand (or corporate) is about trust, if you have a brand you are playing a long-term game, a reputation converts prisoner's dilemma into iterated prisoner's dilemma which has a different outcome.
It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.
Ironically, it snuffs out diversity among companies at a 40k foot level.
All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.
You can get big salaries; and to push the money outside it's very simple, you just need to spend it through other companies.
Additional bonus with some structures: If the co-investors are also the donators to the non-profit, they can deduct these donations from their taxes, and still pocket-back the profit, it's a double-win.
No conspiracy needed, for example, it's very convenient that MSFT can politely "influence" OpenAI to spend back on their platform a lot of the money they gave to the non-profit back to their for-profit (and profitable) company.
For example, you can create a chip company, and use the non-profit to buy your chips.
Then the profit is channeled to you and your co-investors in the chip company.
Oooh, yeah. "Must have".
Would you not when the AI safety wokes decide the torch the rewards of your hard work of grinding for years? I feel there is less groupthink and everyone saw the board as it is and their inability lead, or even act rationally. OpenAI did not just become a sinking ship, but it was unnecessary sunk by someone not skin in the game and your personal wealth and success was tied to the ship.
Participating in that is assimilation.
Absolutely. The focus on the leadership of OpenAI isn't because people think that the top researchers and scientists are unimportant. It's because they realize that they are important, and as such, the person who decides the direction they go in is extremely important. End up with the wrong person at the top, and all of those researchers and scientists end up wasting time spinning wheels on things that will never reach the public.
Sure, you can talk about results in terms of their monetary value but it doesn’t make sense to think of it in terms of the profit generated directly by the actor.
For example Pfizer made huge profits off of the COVID-19 vaccine. But that vaccine would never have been possible without foundational research conducted in universities in the US and Germany which established the viability in vivo of mRNA.
Pfizer made billions and many lives were saved using the work of academics (which also laid the groundwork for future valuable vaccines). The profit made by the academics and universities was minimal in comparison.
So, whose work was more valuable?
While having OpenAI as a Microsoft DeepMind would have been an ok second-best solution, the status quo is still better for Microsoft. There would have been a bunch of legal issues and it would be a hit on Microsoft's bottom line.
I don't personally like him, but I must admit he displayed a lot more leadership skills than I'd recognize before.
It's inherently hard to replace someone like that in any organization.
Take Apple, after losing Jobs. It's not that Apple was a "weak" organization, but really Jobs that was extraordinary and indeed irreplaceable.
No, I'm not comparing Jobs and Sam. Just illustrating my point.
Was it a mistake to create OpenAI as a public charity?
Or was it a mistake to operate OpenAI as if it were a startup?
The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.
I would expect people with backgrounds like Sheryl Sandberg or Dr. Lisa Sue to sit in the position. The two replaced women would have looked like diversity hires had they not been affiliated with an AI doomer organization.
I hope there’s diversity of representation as they fill out the rest of the board and there’s certainly women who have the credentials, but it’s important that they don’t appear grossly unqualified when they sit next to the other board members.
Eventually you need to expand, despite some risk, to push the testing forward.
Everyone has a different opinion on what level of safety AI should reach before it's released. "Makes no mistakes" and "never says something mean" are not attainable goals vs "reduce the rate of hallucinations, as defined by x, to <0.5% of total respinses" and "given a set of known and imagined scenarios, new Model continues to have a zero false-negative rate".
When it's an engineering problem we're trying to solve, we can mqke progress, but no company can avoid all forms of harm as defined by everyone.
Not-validated, unsigned letter [1]
>>All companies are monocultures
yes and no. There has be diversity of thought to ever get anything done really, ever everyone is just sycophants all agreeing with the boss then you end up with very bad product choices, and even worse company direction.
yes there has to be some commonality. some semblance of shared vision or values, but I dont think that makes a "monoculture"
[1] https://wccftech.com/former-openai-employees-allege-deceit-a...
Are they all in the same “tribe”? Maybe you should enlarge the definition?
How about us all IT people who watched the drama unfolding on Twitter while our friend are using FB and Insta, we are far from SV and have mixed feelings about Elon Musk while never in a million years wanting to be like him? Also same “tribe”?
Microsoft can and will be using GPT4 as soon as they get a handle on it, and if it doesn't boil their servers to do so. If you want deceleration you would need someone with an incentive that didn't involve, for example, being first to market with new flashy products.
So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."
Energy physics yield compute, which yields brute forced weights (call it training if you want...), which yields AI to do energy research ..ad infinitum, this is the real singularity. This is actually the best defense against other actors. Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/Simulcrums/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action, instead of providing an informational reply, only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
I emphasized product because OpenAI may have great technology. But any product they sell is going to require mass compute and a mass sales army to go into the “enterprise” and integrate with what the enterprise already has.
Guess who has both? Guess who has neither?
And even the “products” that OpenAI have now can only exist because of mass subsidies by Microsoft.
Unless they had something in their “DNA” that allowed them to build enough compute and pay their employees, they were never going to “win” without a mass infusion of cash and only three companies had enough compute and revenue to throw at them and only two companies had relationships with big enterprise and compute - Amazon and Microsoft.
Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.
Did someone took the pen from the writers? Go ahead and write whatever you want.
It was an example of a constraint a company might want to enforce in their AI.
There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.
I don’t think the IRS cares much about this kind of thing. What would be the claim? They OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for SEC more than IRS.
1. Keep doing your work, and focus on building your product. 2. Ignore the noise, go back to 1.
What makes this "likely"?
Or is this just pure conjecture?
Elon Musk’s neuralink is a good example - the work they’re doing there was attacked by academics saying they’d done this years ago and it’s not novel, yet none of them will be the ones who ultimately bring it to market.
"Because someone acts differently than I expected, they must lacks of critical thinking."
Are you an insider? If not, have you considered that perhaps OpenAI employees are more informed about the situation than you?
It is sort of strange that our communal reaction is to say "well this board didn't act anything like a normal corporate board": of course it didn't, that was indeed the whole point of not having a normal corporate board in charge.
Whatever you think of Sam, Adam, Ilya etc, the one conclusion that seems safe to reach is that in the end, the profit/financial incentives ended up being far more important than the NGOs mission, no matter what legal structure was in place.
Which might have an oversight from AMZN instead of MSFT ?
Lol HN lawyering is hilarious.
Chilling to hear the corporate oligarchs completely disregard the feelings of employees and deny most of the legitimacy behind these feelings in such a short and sweeping statement
It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.
Depending on what you mean by "the drama", Microsoft was very clearly involved. They don't appear to have been in the loop prior to Altman's firing, but they literally offered jobs to everyone who left in solidarity with same. Do we really think things like that were not intended to change people's minds?
Sure I’ll keep using ChatGPT in a personal capacity/as search. But no way I’d trust my business to them
From https://openai.com/our-structure
- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
-Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
-Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
-Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.
-Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
It's absolutely believable that at first he thought the best way to safeguard AI was to get rid of the main advocate for profit-seeking at OpenAI, then when that person "fell upward" into a position where he'd have fewer constraints, to regret that decision.
The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.
Did you think OP meant there was some inherent conflict of interest with charities?
Could it possibly be that the majority of OpenAI's workforce sincerely believed a midnight firing of the CEO were counterproductive to their organization's goals?
Offering people jobs is neither illegal nor immoral, no? And wasn't HN also firmly on the side of abolishing non-competes and non-soliciting from employment contracts to facilitate freedom of employment movement and increase industry wages in the process?
Well then, there's your freedom of employment in action. Why be unhappy about it? I don't get it.
Explain how an MS employee would have greater conflict of interest.
And I'd sayu should read the book so we can have a nice chat about it. Making wild guesses and assumptions is not really useful.
> If you really care about AI safety, you'd be putting it under government control as utility, like everything else.
This is a bit jumbled. How do you think "control as utility" would help? What would it help with?
The comment you responded to made neither of those claims, just that they were "involved".
But in reality you can’t protect from all the possible dangers and, worse, fear-mongering usually ends up doing more bad than good, like when it stopped our switch to nuclear power and kept us burning hydrocarbons thus bringing about Climate Change, another civilization-ending danger.
Living your life cowering in fear is something an individual may elect to do, but a society cannot - our survival as a species is at stake and our chances are slim with the defaults not in our favor. The risk that we’ll miss a game-changing discovery because we’re too afraid of the potential side effects is unacceptable. We owe it to the future and our future generations.
Also, working for a subsidiary (which was likely going to be given much more self-governance than working directly at megacorp), doesn’t necessarily mean “evil”. That’s a very 1-dimensional way to think about things.
Self-disclosure: I work for a megacorp.
Concretely, it sounds like this incident brought a lot of internal conflicts to the surface, and they got more-or-less resolved in some way. I can imagine this allows OpenAI to execute with greater focus and velocity going forward, as the internal conflict that was previously causing drag has been resolved.
Whether or not that's "better" or "stronger" is up to individual interpretation.
It also has confirmed that greed and cult of personality win in the end.
https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
Example: Put a loser as CEO of a rocket ship, and there is a huge chance that the company will still be successful.
Put a loser as CEO of a sinking ship, and there is a huge chance that the company will fail.
The exceptional CEOs are those who turn failures into successes.
The fact this drama has emerged is the symptom of a failure.
In a company with a great CEO this shouldn’t be happening.
Usually publicly owned things end up being controlled by someone: a CEO, a main investor, a crooked board, a government, a shady governmental organization. At least with Elon owning X, things are a little more transparent, he’s rather candid where he stands.
Now, the question is “who owns Musk?” of course.
While this tech has the ability to replace a lot of jobs, it has likely the ability to replace a lot of companies.
"A cult follower does not make an exceptional leader" is the one you are looking for.
Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]
> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.
[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
I don't think long-term unemployment among people with a disability or other long-term condition is "fantasticaly rare", sadly. This is not the frequency by length of unemployment, but:
https://www.statista.com/statistics/1219257/us-employment-ra...
> Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:
"We are not that dumb, and we are not that smart."
https://en.wikipedia.org/wiki/New_CokeAll these opinions of outsiders don’t matter. It’s obvious that most people don’t know Sam personally or professionally and are going off of the combination of: 1. PR pieces being pushed by unknown entities 2. positive endorsements from well known people who are likely know him
Both those sources are suspect. We don’t know the motivation behind their endorsements and for the PR pieces we know the author but we don’t know commissioner.
Would we feel as positive about Altman if it turns out that half the people and PR pieces endorsing him are because government officials pushing for him? Or if the celebrities in tech are endorsing him because they are financially incentivized?
The only endorsements that matter are those of OpenAI employees (ideally those who are not just in his camp because he made them rich).
If you don't think there would be a shareholder revolt against the board, for simply exercising their most fundamental right to fire the CEO, I think you're missing part the picture.
First, they had offers to walk to both Microsoft and Salesforce and be made good. They didn't have to stay and fight to have money and careers.
But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.
I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.
Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and there was potential jail time involved. And they justify smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???
There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.
Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."
Apple and Microsoft even have the strongest financial results in their lifetime.
They're very orthogonal things.
Let’s say there was some non-profit claiming to advance the interests of the world. Let’s say it paid very well to hire the most productive people but they were a bunch of psychopaths who by definition couldn’t care less about anybody but themselves. Should you care about their opinions? If it was a for profit company you could argue that their voice matter. For a non-profit, however, a persons opinion should only matter as far as it is aligned with the non-profit mission.
The investment is refundable and has high priority: Microsoft has a priority to receive 75% of the profit generated until the 10B USD have been paid back
+ (checks notes) in addition (!) OpenAI has to spend back the money in Microsoft Cloud Services (where Microsoft takes a cut as well).
The toothpaste is out of the tube, but this tech will radically change the world.
And if they do prefer it as a for profit company, why would that make them morally bankrupt?
Having no leadership at all guarantees failure.
Still a good deal, but your accounting is off.
https://www.irs.gov/charities-non-profits/charitable-organiz...
I've worked with a contractor that went into a coma during covid. Nearly half a year in a coma, then rehab for many more months. Guy is working now, but not shape.
I don't know the stats, but I'd be surprised if long medical leaves are as rare as you think.
The idea that the marketplace is a meritocracy of some kind where whatever an individual deems as "merit" wins is just proven to be nonsense time and time again.
https://www.thecrimson.com/article/2023/5/5/epstein-summers-...
I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]
> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed
[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
And while also working for a for-profit company.
Additionally - I have not seen someone else talk about this, its just been a few days. Calling it a narrative is a stretch, and dismissive by implying manipulation.
Finally why would Sam joining MSFT be better than this current situation?
Funnily enough a bit like there's a middle ground between Microsoft should not be allowed to create browsers or have license agreements and Microsoft should be allowed to dictate bundling decisions made by hardware vendors to control access to the Internet
It's not freedom of employment when funnily enough those jobs aren't actually available to any AI researchers not working for an organisation Microsoft is trying to control.
My guess is that he has less than a year, based on the my assumption that there will be constant pressure placed on the board to oust him.
It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?
Toner got her board seat because she was basically Holden Karnofsky's designated replacement:
> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.
> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
Instead he will come away with this untouchable. He’ll get to stack the board like he wanted to. Part of being on a board of directors is sticking to your decisions. They are weak and weren’t prepared for the backlash of one person.
Bigger picture, I don't think the "money/VC/MSFT/commercialization faction destroyed the safety/non-profit faction" is mutually exclusive with "the board fucked up." IMO, both are true
But all that seems a lot more like an episode of Succession and less like real life to be honest.
The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".
There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)
1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...
OpenAI is a charity nonprofit, in fact.
> Microsoft's investment is in OpenAI Global, LLC, a for-profit company.
OpenAI Global LLC is a subsidiary two levels down from OpenAI, which is expressly (by the operating agreement that is the LLC's foundational document) subordinated to OpenAI’s charitable purpose, and which is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity, on behalf of the OpenAI charity.
And, particularly, the OpenAI board is. as the excerpts you quote in your post expressly state, the board of the nonprofit that is the top of the structure. It controls everything underneath because each of the subordinate organizations foundational documents give it (well, for the two entities with outside invesment, OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.
I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.
Im not criticizing. Big fan of avoiding being taxed to fund wars....but its just funny to me it seems like theyre sort of having their cake and eating it too with this kind of structure.
Good for them.
Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.
OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.
OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by their foundational agreements that gie them legal existence, by a different (AFAICT not for-profit but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC.)
What's happening right now is people just starting to reckon with the fact that technological progress on it's own is necessarily unaligned with human interests. This problem has always existed, AI just makes it acute and unavoidable since it's no longer possible to invoke the long-tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at it's core a problem of reconciling this, and it will inherently fail in absence of explicitly imposing non-Enlightenment values.
Seeking to build openAI as a nonprofit, as well as ousting Altman as CEO are both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensity it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat the problem.
- It can't plan
- It can't do arithmetic
- It can't reason
- It can approximately retrieve knowledge with a natural language query (there are some issues with this, but it's very good)
- It can encode data into natural languages and other modalities
I'm not worried about it, I am worried about how badly people have misunderstood what it can do and then attempted to use it for things that matter.
But I'm not surprised.
Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably someone with no stake in anything AI either biz or academic.
To that end, observing unanimous behavior may imply some bias.
Here, it could be people fearing being a part of the minority. The minority are trivially identifiable, since the majority signed their names on a document.
I agree in your stance that a majority of the workforce disagreed with the way things were handled, but that proportion is likely a subset of the proportion who signed their names on the document, for the reasons stated above.
There is no guarantee or natural law that an exceptional leader's ideology will be exceptional. Exceptionality is not transitive.
I don't like ignorance being promoted under the cloak of not causing offense. It causes more harm than good. If there's a societal problem, you can't tackle it without knowing the actual cause. Sometimes the issue isn't an actual problem caused an 'ism,' it's just biology, and it's a complete waste of resources trying to change it.
The logic being that if any opinion has above X% support, people are choosing it based on peer pressure.
Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the missing at OpenAI?
Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.
DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.
Take the movement to redefine Math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since its origins are "too white". Oh the horror! This is false, as there are many prominent non-white mathematicians that existed prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.
Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.
Do you agree that the following company pairs are competitors?
* FB : TikTok
* TikTok : YT
* YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix....
To be clear, this is an abuse of logic and hence somewhat tongue in cheek, but I also don't think either of the above comparisons are wholly unreasonable. At the end of the day, it's eyeballs all the way down and everyone wants as many as of them shabriri grapes as they can get.
So clearly this wasn't a 50/50 coin flip.
The question at hand is whether the skew against the board was sincere or insincere.
Personally, I assume that people are acting in good faith, unless I have evidence to the contrary.
It's a common theme in the overall critique of late stage capitalism, is all I'm saying — and that it could be a factor in influencing OpenAI's employees' decisions to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.
I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.
I am sure that is true. But the for-profit uses IP that was developed inside of the non-profit with (presumably) tax deductible donations. That IP should be valued somehow. But, as I said, I am sure they were somehow able to structure it in a way that is legal, but it has an illegal feel to it.
I suspect OpenAI has an old guard that is disproportionately ideological about AI, and a much larger group of people who joined a rocket ship led by the guy who used to run YC.
And there is also a class of people that resist all moderation on principle even when it's ultimately for their benefit. See, Americans whenever the FDA brings up any questions of health:
* "Gas Stoves may increase Asthma." -> "Don't you tread on me, you can take my gas stove from my cold dead hands!"
Of course it's ridiculous - we've been through this before with Asbestos, Lead Paint, Seatbelts, even the very idea of the EPA cleaning up the environment. It's not a uniquely American problem, but America tends to attract and offer success to the folks that want to ignore these on principles.
For every Asbestos there is a Plastic Straw Ban which is essentially virtue signalling by the types of folks you mention - meaningless in the grand scheme of things for the stated goal, massive in terms of inconvenience.
But the existence of Plastic Straw Ban does not make Asbestos, CFCs, or Lead Paint any safer.
Likewise, the existence of people that gravitate to positions of power and middle management does not negate the need for actual moderation in dozens of societal scenarios. Online forums, Social Networks, and...well I'm not sure about AI. Because I'm not sure what AI is, it's changing daily. The point is that I don't think it's fair to assume that anyone that is interested in safety and moderation is doing it out of a misguided attempt to pursue power, and instead is actively trying to protect and improve humanity.
Lastly, your portrayal of journalists as power figures is actively dangerous to the free press. This was never stated this directly until the Trump years - even when FOX News was berating Obama daily for meaningless subjects. When the TRUTH becomes a partisan subject, then reporting on that truth becomes a dangerous activity. Journalists are MOSTLY in the pursuit of truth.
His second wife apparently asked him to buy Twitter and fix its, in her opinion, liberal bias.
There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.
In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.
Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth
Were you watching a different show than the rest of us?
There are two hard problems: naming things, cache invalidation, and off-by-one errors.
I mean that's certainly been my experience of it thus far, is companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.
Delivering profits and shareholder value is the sole and dominant force in capitalism. Remains to be seen whether that is consistent with humanity's survival
It's a real shame too, because this is a clear loss for the AI Alignment crowd.
I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field- especially compared to something like crypto.
also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive, which Elon is miffed about and Sam has defended explicitly
A charity acting (due to the influence of a conflicted board member that doesn't recuse) contrary to its charitable mission in the interests of the conflicted board member or who they represent does something similar with regard to liability of the firm to various stakeholders with a legally-enforceable interest in the charity and its mission, but also is also a public civil violation that can lead to IRS sanctions against the firm up to and including monetary penalties and loss of tax exempt status on top of whatever private tort liability exists.
The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.
(But yes; what you describe is absolutely happening left and right...)
People concerned about AI safety were probably not going to join in the first place...
I think people forget sometimes that comments come with a context. If we are having a conversation about Deep Water Horizon someone will chime in about how safe deep sea oil exploration is and how many failsafes blah blah blah.
“Do you know where you are right now?”
There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.
> also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive
No, they admitted they needed to sell products rather than merely take donations to survive, and needed to be able to return profits from doing that to investors to scale up enough to do that, so they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit, to do that.
But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.
If they were super-duper worried about how Sam was going to cause a global extinction event with AI, or even just that he was driving the company in too commercial of a direction, they should have said that to everyone!
The idea that they could fire the CEO with a super vague, one-paragraph statement, and then expect 800 employees who respect that CEO to just... be totally fine with that is absolutely fucking insane, regardless of the board's fiduciary responsibilities. They're board members, not gods.
Did you read the bylaws? They have no responsibility to do any of that.
Just because someone says they agree with a mission doesn’t mean they have their heads screwed on straight. And my thesis is that the more power they have in the real world the worse the outcomes - because powerful people become progressively immune to feedback. This has been working swimmingly for me for decades, I don’t need humility in a new situation.
They did notify everyone. They did it after firing which is within their rights. They may also choose to stay silent if there is legitimate reason for it such as making the reasons known may harm the organization even more. This is speculation obviously.
In any case they didn't omit doing anything they need to and they didn't exercise a power they didn't have. The end result is that the board they choose will be impotent at the moment, for sure.
Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.
You forgot to do Oracle and Tesla.
People will contort themselves into pretzels to invent rationalizations.
(A seriously underrated statistic IMO is how many women leave the workforce due to pregnancy-related disability. I know quite a few who haven't returned to full-time work for years after giving birth because they're still dealing with cardiovascular and/or neurological issues. If you aren't privy to their medical history it would be very easy to assume that they just decided to be stay-at-home mums.)
I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.
Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.
One developer (Ilya) vs. One businessman (Sam) -> Sam wins
Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win
From the outside it looks like developers held the power all along ... which is how it should be.
I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.
> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.
https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...
> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.
https://variety.com/2023/biz/news/wga-ratify-contract-end-st...
It's a post-"Don't be evil" world today.
1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.
2. Sam approved each hire in the first place.
3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.
Either way on how they got to that conclusion of banding together to quit, it was a good idea, and it worked. And it is a check on power for a bad board of directors, when otherwise a board of directors cannot be challenged. "OpenAI is nothing without its people".
He backed it and then signed the pledge to quit if it wasn't undone.
What's the evidence he was behind it and not D'Angelo?
That is a part of the reason why organizations choose to set themselves up as a non-profit, to help codify those morals into the legal status of the organization to ensure that the ingrained selfishness that exists in all of us doesn’t overtake their mission. That is the heart of this whole controversy. If OpenAI was never a non-profit, there wouldn’t be any issue here because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for profit businesses and there would be no room for the board to fire or even really criticize Sam.
Employees, customers, government.
If motivated and aligned, any of these three could end you if they want to.
Do not wake the dragons.
You could tell the same story about a rising sports team replacing their star coach, or a military sacking a general the day after he marched through the streets to fanfare after winning a battle.
Even without the money involved, a sudden change in leadership with no explanation, followed only by increasing uncertainty and cloudy communication, is not going to go well for those who are backing you.
Even in the most altruistic version of OpenAI's goals I'm fairly sure they need employees and funding to pay those employees and do the research.
not sure what event you're thinking of, but Google was a public company before 10 years and they started their first ad program just barely more than a year after forming as a company in 1998.
https://news.ycombinator.com/item?id=38375239&p=2
Employees who have $$$ incentive threaten to quit if that is taken away. News at 8.
It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.
He's not wrong, something is lost and it has to do with what we call our "humanity", but the benefits greatly outweigh that loss.
No, because it is an effort in futility. We are evolving into extinction and there is nothing we can do about it. https://bower.sh/in-love-with-a-ghost
I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.
Right now, quota is very valuable and scarce, but credits are easy to come by. Also, Azure credits themselves are worth about $0.20 per dollar compared to the alternatives.
Is this still true when the board gets overhauled after trying to uphold the moral compass.
Like, nobody is going to arrest you for spitting on the street especially if you're an old grandpa. Nobody is going to arrest you for saying nasty things about somebody's mom.
You get my point, to some boundary both are kinda within somebody's rights, although can be suable or can be reported for misbehaving. But that's the keypoint, misbehavior.
Just because something is within your rights doesn't mean you're not misbehaving or not acting in an immature way.
To be clear, Im not denying or agreeing that the board of directors acted in an immature way. I'm just arguing against the claim that was made within your text that just because someone is acting within their rights that it's also a "right" thing to do necessary, while that is not the case always.
https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...
Its a gut check on morals/ethics for sure. I'm always pretty torn on the tipping point for empathising there in an industry like tech though, even more so for AI where all the money is today. Our industry is paid extremely well and anyone that wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would have easily been 800 jobs floating around for AI experts that chose to leave OpenAI because they preferred the for-profit approach.
At least how I see it, Sam coming back to OpenAI is OpenAI abandoning the original vision and leaning full into developing AGI for profit. Anyone that worked there for the original mission might as well leave now, they'll be throwing AI risk out the window almost entirely.
does it mean it's right or professional?
getting your point, but i hope you get the point i make as well, that just because you have no responsibility for something doesn't mean you're right or not unethical for doing or not doing that thing. so i feel like you're losing the point a little.
like, you get me, the board of directors is not the only actual power within a company, and that was proven by the whole scandal of Sam being discarded/fired that was made by the developers themselves. they also have the right to exercise their right to just not work at this company without the leader they may had liked.
I've been trying to really understand the situation and how Hitler was able to rise to power. The horrendous conditions placed on Germany after WWI and the Weimar Republic for example have really enlightened me.
Have you read any of the big books on the subject that you could recommend? I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or do you have books you would recommend?
One developer (Woz) vs One businessman (Jobs) -> Jobs wins
I didn't say anything about higher order values. Getting people to want what you want, and do what you want is a skill.
Hitler was an extraordinary leader. That doesn't imply anything about higher values.
If OpenAI didn't have adequate safeguards, either through negligence or becauase it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguard” thing.
We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing specific arguments up about how this situation is an anomaly, and few of them do that.
Instead it often sounds like “it’s very unusual for the front to fall off”.
I've been in companies where the board won, and they installed a stoolie that proceeded to drive the company into the ground. Anybody who stood up to that got fired too.
Lucky for us this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every fact of our lives. So we're all safe here!
My job also secures my loyalty and support with a financial incentive. It is probably the most common way for a business leader to align interests.
Kings reward dukes, and generals pay soldiers. Politicians trade policies. That doesn't mean they arent leaders.
As it's within the board's rights to hire or fire people like Sam or the developers.
This is Altman's playbook. He did a similar ousting at Reddit. This was planned all along to overturn the board. Ilya was in on it.
I'm not normally a conspiracy theorist. But fool me ... you can't be fooled again. As they say in Tennessee
I still believe in the theory that Altman was going hard after profits. Both McCauley and Toner are focused on the altruistic aspects of AGI and safety. Altman shouldn't be at OpenAI and neither should D’Angelo.
Seeing a bug in your comment here:
You reference the pages like this:
https://news.ycombinator.com/item?id=38375239?p=2
The second ? should be an & like this:
https://news.ycombinator.com/item?id=38375239&p=2
Please feel free to delete this message after you've received it.
Maybe that was the case at some point, but clearly not anymore ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, i.e. to engineers leaving Google?
I'd bet more than half the people are just there for the money.
Disappointed by the outcome, but perhaps mission-driven AI development -- the reason OpenAI was founded -- was never possible.
Edit: I applaud the board members for (apparently, it seems) trying to stand up for the mission (aka doing the job that they were put on the board to do), even if their efforts were doomed.
All this proved is that you can't take a major action that is deeply unpopular with employees, without consulting them, and expect to still have a functioning organization. This should be obvious, but it apparently never crossed the board's mind.
I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful) so I imagine we could solve it similarly for AIs.
In the end, if it's quality content, learning it is beneficial - no matter who produced it. Garbage needs to be eliminated and the distinction is made either by human trainers or already trained AIs. I have no idea how to train the latter but I am no expert in this field - just like (I suspect) the author of that blog.
It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.
In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.
One might say the mission was pointless since Google, Meta, MSFT would develop it anyway. That's really a convenience argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(
Also GP snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big for people not to snark back at a snarker, but snarkers gonna snark.
> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.
This was said loud and clear when Microsoft joined in the first place but there were no takers.
citation?
"Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past" [1]
HN plans to be multi-core?!?! A bigger scoop than OpenAI governance!
Anything more you can share?
[1] >>38351005
What about this is apparent to you?
What statement has the board made on how they fired Altman "for the mission"?
Have I missed something?
1) openAI was explicitly founded to NOT develop AI based on "market forces"; it's just that they "pivoted" (aka abandoned their mission) once they struck gold in order to become driven by the market
2) this is exactly the reasoning behind nuclear arms races
They may choose to, and they did choose to.
But it was an incompitant choice. (Obviously.)
I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.
How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.
most certainly would have still taken place; no one cares about how it was done; what they care about it being able to make $$; and it was clearly going to not be as heavily prioritized without Altman (which is why MSFT embraced him and his engineers almost immediately).
> notified their employees and investors they did notify their employees; they have fiduciary duty to investors as a nonprofit.
Here lies the body of William Jay,
Who died maintaining his right of way –
He was right, dead right, as he sped along,
But he's just as dead as if he were wrong.
- Dale CarnegieBut why would anyone expect 800 people to risk their livelihoods and work without a little serious justification? This was an inevitable reaction.
Imagine arguing this in another context: "Man, if only the Supreme Court had clearly articulated its reasoning in overturning Roe v Wade, there wouldn't have been all this outrage over it."
(I'm happy to accept that there's plenty of room for avoiding some of the damage, like the torrents of observers thinking "these board members clearly don't know what they're doing".)
But i guess people here are either waiting for wealth to trickle down on them or believe the torrent of psychological operations so much peoples minds close down when they intuit the circular brutal nature of hierarchical class based society, and the utter illusion democracy or meritocracy is.
The uppermost classes have been trickters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint; it was psyopped in the early 90's by "libertarianism" and worship of bureaucracy to create a new class of cybernetic soldiers working for the oligarchy.
> Our primary fiduciary duty is to humanity.
Also, the language of the charter has watered down a stronger commitment that was in the first version. Others have quoted it and I'm sure you can find it on the internet archive.
To be updated as more evidence rolls in.
It isn't that clear. People missing ui elements they have to scroll to is one of the most common ways of missing ui elements.
Pretty much the same method was used to shut down Rauma-Repola submarines https://yle.fi/a/3-5149981
After? They get the godbox. I have no idea what happens to it after that. Modelweights are stored in secure govt servers, installed backdoors are used to cleansweep the corporate systems of any lingering model weights. Etc.
Sure, they don't have to. How did that work out?
Four CEOs in five days, their largest partner stepping in to try to stop the chaos, and almost the entirety of their employees threatening to leave for guaranteed jobs at that partner if the board didn't step down.
Note that the response is Altman's, and he seems to support it.
As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he know (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.
OTOH, The precautionary principle is too cautious.
There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.
This doesn't mean it's time to stop progress, but employing a whole lot of mitigation of risk in how we approach it makes sense.
The board seemed to have the confidence of none of the groups they needed confidence from.
https://en.wikipedia.org/wiki/United_States_government_role_...
If you're trying to draw a parallel here then safety and the federal government needs to catch up. There's already commercial offerings that any random internet user can use.
Ilya is also not a developer, he's a founder of OpenAI and was the CSO.
There should be regulations on existing products (and similar products released later) as they exist and you know what you’re applying regulations to.
If people are easily replaceable then they don’t hold nearly as much power, even en mass.
At some point, our try it until it works approach will bite us. Consider the calculations done to determine if fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially we're going to run into that situation more and more frequently. Regardless if you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced.
If I was an investor. I would be scared.
GP didn't speak of betraying people; he spoke of betraying their own statements. That just means doing what you said you wouldn't; it doesn't mean anyone was stabbed in the back.
Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.
Either by keeping OpenAI as-is, or the alternative being moving everyone to Microsoft in an attempt to keep things going would work for Satya.
Quake and Counter-Strike in the 1990s ran like garbage in software-rendering mode. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. Almost two years after Quake's release did dedicated 3d video cards (voodoo 1 and 2 were accelerators, depended on a seperate 2d VGA graphics card to feed it) begin to hit the market.
Nowadays you can run those games (and their sequels) in the thousands (tens of thousands?) of frames per second on a top end modern card. I would imagine similar events with hardware will transpire with LLM. OpenAI is already prototyping their own hardware to train and run LLMs. I would imagine NVidia hasn't been sitting on their hands either.
Is Donald Trump allowed to run a charity in New York?
This is still making the same assumption. Why are you assuming they are acting outside of self-interest?
The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.
Developer platform updates seem to be inline.
And in any case, the board also failed to specify how their action furthered the mission of the company.
From all appearances, it appeared to damage the mission of the company. (If for no other reason that it dissolve the company and gave everything to MSFT.)
The simplest is pretty easy to articulate and weigh.
If you can make a $5,000 GPU into something that is like an 80IQ human overall, but with savant-like capabilities in accessing math, databases, and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.
The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks and took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year replacing tens of millions of humans. Workers will not be able to adapt.
Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.
For instance, I'm a schoolteacher these days. I'm already watching kids becoming completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12 year old can't tell the difference)-- so why bother to learn? If fairly-stupid AI has this effect, what will AGI do?
And this is assuming that the AGI itself stays fairly dumb and doesn't do anything malicious-- deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?
If Altman was silent and/or said something like "people take some time off for Thanksgiving, in a week calmer minds will prevail" while negotiating behind the scenes, OpenAI would look a lot less dire in the last few days. Instead he launched a public pressure campaign, likely pressured Mira, got Satya to make some fake commitments, got Greg Bockman's wife to emotionally pressure Ilya, etc.
Masterful chess, clearly. But playing people like pieces nonetheless.
Are children equally demoralized about additions or moving fast than writing? If not, why? Is there a way to counter the demoralization?
Yes, if we're concerned about the potential consequences of releasing AGI, we need to consider the likely outcomes if AGI is released. Ideally we think about this some before AGI shows up in a form that it could be released.
> it needs a certain socio-economic response and so forth.
Absent large interventions, this will happen.
> Are children equally demoralized about additions
Absolutely basic arithmetic, etc, has gotten worse. And emerging things like photomath are fairly corrosive, too.
> Is there a way to counter the demoralization?
We're all looking... I make the argument to middle school and high school students that AI is a great piece of leverage for the most skilled workers: they can multiply their effort, if they are a good manager and know what good work product looks like and can fill the gaps; it works somewhat because I'm working with a cohort of students that can believe that they can reach this ("most-skilled") tier of achievement. I also show students what happens when GPT4 tries to "improve" high quality writing.
OTOH, these arguments become much less true if cheap AGI shows up.
And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.
So they allied with Helen to countercoup Greg/Sam.
I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.
I disagree. Yes, Sam may have when it OpenAI was founded (unless it was just a ploy), but certainly now it's clear that the big companies are on a race to the top and safety or guardrails are mostly irrelevant.
The primary reason that the Anthropic team left OpenAI was over safety concerns.
Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.
He is explicitly saying they don’t compete. And they don’t.
https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation
There are other similar examples like Ikea.
But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.
But future signees are influenced by previous signees.
Acting in good faith is different from bias.
(It's also totally possible FTX still has everyone's money. They own a lot of Anthropic shares that are really valuable. But he's still been convicted because of all the fraud they did.)
Still think D'Angelo wasn't the power player in the room?
I also have a Twitter account. Guess my opinion on the current or former Twitter CEOs?
You mean like they already do on Amazon Bedrock?
If you throw your hands up and say, “well kudos to them, theyre actually fulfilling their goal of being a non profit. I’m going to find a new job”. That’s fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that’s on you.
At this point I suspect you are being deliberately obtuse. Have a good day.
His voting power will get diluted as they add the next six members, but again, all three of them are going to decide who the next members are going to be.
A snippet from the recent Bloomberg article:
>A person close to the negotiations said that several women were suggested as possible interim directors, but parties couldn’t come to a consensus. Both Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated, *but deemed to be too close to Altman*, this person said.
Say what else you want about it, this is not going to be a board automatically stacked in Altman's favor.
I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.
[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.
Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:
Yes, there is probably some overlap in our understanding of biology for disease and cure, but it is a mistake to assume that they will balance each other out.
That means the remaining conflicts are when the board has to make a decisions between growing the profit or furthering the mission statement. I wouldn't trust the new board appointed by investors to ever make the correct decision in these cases, and they already kicked out the "academic" board members with the power to stop them.
How does that work when we're talking about non-profit motives? The lawyers are paid by the companies benefitting from these conflicts, so how is it at all reassuring to hear that the people who benefit from the conflict signed off on it?
> We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
That's the concern. They've just replaced people who "maybe" cared about the mission statement with people who you've correctly identified care more about profit growth than the nonprofit mission.
The point of the board is to ensure the charter is being followed, when the biggest concern is "is our commercialization getting in the way of our charter" what else does it mean to replace "academics" with "businesspeople"?
Also being better at humans at everything is not a prerequisite for danger. Probably a scary moment is when it could look at a C (or Rust, C++, whatever) codebase, find an exploit, and then use that exploit as a worm. If it can do that on everyday hardware not top end GPUs (either because the algorithms are made more efficient, or every iPhone has a tensor unit).
"Failure" in this context essentially means arriving at a materially suboptimal outcome. Leaders in this situation, can easily be considered "irreplaceable" particularly in the early stages as decisions are incredibly impactful.
Cmon. There’s absolutely no evidence for that and you are just projecting an issue into the situation, rather than it being of any reality.
It’s sickening.
It was capital and the pursuit of more of it.
It always is.
Now this "initial board", tasked with establishing the rest of the board, for a company that wants to create AGI for the benefit of humanity, consists of three white alpha-males. That's just a fact. Is it a coincidence? Of course not.
Its already being done now with actors and celebrities. We live in this world already. AI will just make this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many 'troublesome human employees'.
What story? Any link?
Explicit planning with discrete knowledge is GOFAI and I think isn't workable.
There is whatever's going on here: https://x.com/natolambert/status/1727476436838265324?s=46
But whether it is deserved or not, it is never the question when congratulating a CEO for an achievement.
"Raising private investment allows a non profit to shift cost and risk to other entities."
for a suggestion of that.
Can you explain this further? So Microsoft pays $X to OpenAI, then OpenAI uses a lot of energy and hardware from Microsoft and the $X go back to Microsoft. How does Microsoft gain money this way?
> it seems that Helen was picked by Holden to take his seat.
So you can only speculate as to how she got the seat. Which is exactly my point. We can only speculate. And it's a question worth asking, because governance of America's most important AI company is a very important topic right now.
Furthermore, being removed from the board while keeping a role as chief scientist is different from being fired from CEO and having to leave the company.
I strongly disagree with that. If that was their motivation, then why is it not open-sourced? Why is it hardcoded with prudish limitations? That is the direct opposite of open and free (as in freedom) to me.
Most people just tend to go about it more intelligently than Trump but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.
For example, let's say I'm a big for-profit selling shovels. You're a naive non-profit who needs shovels to build some next gen technology. Turns out you need a lot of shovels and donations so far haven't cut it. I step in and offer to give you all the shovels you need, but I want special access to what you create. And even if it's not codified, you will naturally feel indebted to me. I gain huge upside for just my marginal cost of creating the shovels. And, if I gave the shovels to a non-profit I can also take tax write-offs at the shovel market value.
TBH, it was an amazing move by MS. And MS was the only big cloud provider who could have done it b/c Sataya appears collaborative and willing to partner. Amazon would have been an obvious choice, but they don't partnership like that and instead tend to buy companies or repurpose OSS. And Google can't get out of their own way with their hubris.
If OpenAI is struggling to hard with the corporate alignment problem, how are they going to tackle the outer and inner alignment problems?
This whole conversation has been full of appeals to authority. Just because us tech people don't know some of these names and their accomplishments, we talk about them being "weak" members. The more I learn, the more I think this board was full of smart ppl who didn't play business politics well (and that's ok by me, as business politics isn't supposed to be something they have to deal with).
Their lack of entanglements makes them stronger members, in my perspective. Their miscalculation was in how broken the system is in which they were undermined. And you and I are part of that brokenness even in how we talk about it here
So far we don't have any open source models that are close to GPT4, so we don't know what it takes to run them for similar speeds.
Q: What's AGI?
A: When the machine wakes up and asks, "What's in it for me?"
- - - -
So long, and thanks for all the fish.
I would have guessed the stunt was to hide the switch from sugar to High Fructose Corn syrup.
Case in point, the new AI laws like the EU AI act will outlaw *all* software unless registered and approved by some "authority".
The result will be concentration of power, wealth for the few, and instability and poverty for everyone else.
If that were the case, would they not have presented the new CEO immediately for an “orderly transition”? As I understand it, Ms Murati tried to get Altman back, and when she pressured the board, they tried at least two other possible CEOs before settling on Mr Shear, who also threatened to leave if they could not give evidence of a legal reason for firing Altman. It smells like a personality conflict.
I also don't expect the government to do anything about the OpenAI situation, to be clear. Though my read is actually that the government had to be evolved behind closed doors to move so quickly to get Sam back to OpenAI. Things moved much too quickly and secretively in an industry that is obviously of great interest to the military, there's no way the feds didn't put a finger on the scale to protect their interests at which point they wouldn't come back in to regulate.
Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...
The base non-RLHF GPT models could do translation by prefixing by the target language and a semi colon, but only above a certain amount of parameters are they consistent. GPT-2 didn't always get it right and of course had general issues with continuity. However, you could always do some parts of translation with older transformer models like BERT, especially multilingual ones.
Larger models across different from-base training runs show that they become more effective at translation at certain points, but I think this is about the capacity to store information, not emergence per say (if you understand my difference here). You've probably noticed and it has always seemed to me 4B, 6B and 9B are the largest rough parameter sizes with 2020 style training set ups that you see the most general "appearance" of some useful behaviours that you could "glean" from the web and book data that doesn't include instructions, while consistency seems to remain the domain of larger models or mixed expert models and lots of RLHF training/tricks. The easiest way to see this is to compare GPT-2 large, GPT-J and GPT-20B and see how well they perform at different tasks. However the fact it's about size in these GPTs, and yet smaller models (T5 instruction tuned / multilingual BERT) can perform at the same level on some tasks implies that it is about what the model is focusing it's learning on for the training task at hand, and controllable, rather than being innate at a certain parameter size. Language translations just do make up a lot of the data. I don't think it would emerge if you removed all cases of translation / multi language input/outputs, definitely not at the same parameter size, even if you had the same overall proportion of languages in the training corpus, if that makes sense? It just seems too much an artefact of the corpus aligning with the task.
Likewise for code - Gpt-4 generated code is not like arithmetic in the sense of the way people might mean it for code (e.g. branching instructions / abstract syntax tree) - its a fundamentally local text form of generation, this is why it can happily add illegal imports etc to diffs (perhaps one day training will resolve this) - it doesn't have the AST or compiler or much consistent behaviour to imply it deeply understands as it writes the code what could occur.
However if recent reports about arithmetic being an area of improvement are true, I am very excited, as a lot of what I wrote above - will have to be reconceptualised... and that is the most exciting scenario...
Microsoft wanted to catch up quickly so instead of training the LLM itself, they relied on prompt engineering. This involved pre-loading each session with a few dozen rules about it's behaviour as 'secret' prefaces to the user prompt text. We know this because some users managed to get it to tell them the prompt text.
Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.
Because what you just described would happen the same way with a for-profit company, no?
Because the answer is: Yes, it seems utterly instrumental.