It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have it in its DNA to win; they're too short-sighted and reactive. Big tech will have incredible distribution power, but a real disruptor must be brewing somewhere unnoticed, for now.
This line of argument is facile and destructive to conversation anyway.
It boils down to, "Pointing out corporate hypocrisy isn't valuable because corporations are liars," and (worse) it implies the other person is naive.
In reality, we can and should be outraged when corporations betray their own statements and supposed values.
That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.
From here on out there is going to be far more media scrutiny on who gets picked as a board member, where they stand on the company's policies, and just how independent they really are. Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.
The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests. The naïveté from the NPO faction was believing they’d be able to develop these capacities outside the strict control of the military industrial complex when AI has been established as part of the new Cold War with China.
For me, the whole thing is just human struggle. It is about fighting for the people they love and care about, against people they dislike or are indifferent to.
Anthropic was formed by people who split from OpenAI, and xAI was founded in response to either the company or ChatGPT, so people would have plenty of options.
If the staff had as little to go on as the rest of us, then the board did something that looked wild and unpredictable, which is an acute employment threat all by itself.
From a 2016 New Yorker article:
> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”
> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
source: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
This meme was already dead before the recent events. Whatever the company was doing, you could say it wasn’t open enough.
> a real disruptor must be brewing somewhere unnoticed, for now
Why pretend OpenAI hasn't just disrupted our way of life with GPTs in the last two years? It has been the most high-profile tech innovator recently.
> OpenAI does not have in its DNA to win
This is so vague. What does it not have in its… fundamentals? And what is to “win”? This statement seems like just generic unhappiness without stating anything clearly. By most measures, they are winning. They have the best commercial LLM and continue to innovate, they have partnered with Microsoft heavily, and they have so far received very good funding.
[1]: https://twitter.com/emilychangtv/status/1727216818648134101
Their bank accounts' current and potential future numbers?
Even if they genuinely believed firing Sam would protect OpenAI's founding principles, they couldn't have done a better job of convincing everyone they are NOT able to execute on them.
OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they didn't vote the way you agree with is reaching.
There are only three groups of people who could be subject to betrayal here: employees, investors, and customers. Clearly they did not betray employees or investors, since they largely sided with Sam. As for customers, that's harder to gauge -- did people sign up for ChatGPT with the explicit expectation that the research would be "open"?
The founding charter said one thing, but the majority of the company and investors went in a different direction. That's not a betrayal, but a pivot.
That's incorrect. The new members will be chosen by D'Angelo and the two new independent board members, both of whom D'Angelo had a big hand in choosing.
I'm not saying Larry Summers et al. are going to be in D'Angelo's pocket. But the whole reason he agreed to those picks is that he knows they won't be in Sam's pocket, either. More likely they will act independently and choose future members that they sincerely believe will be the best picks for the nonprofit.
They can't control the CEO, nor fire him.
They can't take action to take back control from Microsoft and Sam, because Sam is the CEO. Even if Sam is of the utmost morality, he would be crazy to help them back into a strong position after last week.
So it's the Sam & Microsoft show now; only a master schemer could win back some power for the board.
And say what you want about Larry Summers, but he's not going to be either Sam's or even Microsoft's bitch.
And if Sam controlled it, it also wouldn't have.
Very harsh words for some of the highest-paid, smartest people on the planet. The employees built GPT-4, the most advanced AI on the planet; what did you build? Do you still claim they're more deficient in critical thinking than you?
Everything has been pure speculation. I would curb my judgement if I were you, until we actually know what happened.
To an extent, the promise of the non-profit was that they would be safe, expert custodians of AI development, driven not primarily by the profit motive but by safety and societal considerations. Has this larger group been 'betrayed'? Perhaps.
Think of that what you wish. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.
They couldn’t sit back and dwell on it for a few days because then the decision (i.e. the status quo) would have been made for them.
I'm sure there has been a lot of critical thinking going on. I would venture a guess that employees decided that Sam's approach is much more favorable for the price of their options than the original mission of the non-profit entity.
The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.
So they bowed.
If the "other side" (board) had put up a SINGLE convincing argument on why Sam had to go maybe the employees would have not supported Sam unequivocally.
But, atleast as an outsider, we heard nothing that suggests board had reasons to remove Sam other than "the vibes were off"
Can you really accuse the employees of groupthink when the other side is so weak?
https://www.theverge.com/2023/11/20/23968988/openai-employee...
How do you know?
> look at how “quickly” everyone got pulled into
Again, how do you know?
The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.
Was there any concrete criticism in the paper that was written by that board member? (Genuinely asking, not a leading question)
Being an expert in one particular field (AI) does not mean you are good at critical thinking or thinking about strategic corporate politics.
Deep experts are some of the easiest con targets because they suffer from an internal version of "appeal to false authority".
Of course, the employees want the company to continue, and weren't told much at this point so it is understandable that they didn't like the statement.
No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.
It's also not surprising that people who are near the SV culture will think that AGI needs money to get developed, and that money in general is useful for the kind of business they are running. And that it's a business, not a charity.
I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".
Reminds me of a quote: "A civilization is a heritage of beliefs, customs, and knowledge slowly accumulated in the course of centuries, elements difficult at times to justify by logic, but justifying themselves as paths when they lead somewhere, since they open up for man his inner distance." - Antoine de Saint-Exupery.
Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.
We see that most powerful people are in it for the money and the power-ego trip, and literally nothing else, pesky morals be damned. That may be acceptable for some ad business, but here the stakes are potentially everything, and we have no clue what the actual risk percentage is.
To me it's very similar to the naivety particle physicists expressed in the field's early days, followed by the reality check of realpolitik and messed-up humans in power once the bombs were built and used, and then a hundred thousand more were produced.
So who holds all the data in closed silos? Google and Facebook. We may have lost the battle for an "open and fair" AI paradigm a long time ago.
Their actions were the complete opposite of open. Rather than, I don't know, being open and talking to the CEO to share concerns and change the company, they just threw a tantrum and fired him.
But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.
On the other hand, if they had some serious concerns, serious enough to fire the CEO in such a disgraceful way, I don't understand why they don't stick to their guns and explain themselves. If they think OpenAI under Sam's leadership is going to destroy humanity, I don't understand how they (e.g. Ilya) reversed their opinions after a day or two.
That ship sailed long ago, no?
But I agree that the company seems less trustworthy now, like it's too CEO-centered.
Stupidity is defined by self-harming actions and beliefs, not by low IQ.
You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project
That wasn't the case. So it may not be so far-fetched to call her actions borderline, as it is also very easy to hide personal motives behind altruistic ones.
Not someone I would like to see running the world’s leading AI company
[1] https://www.thenation.com/article/world/harvard-boys-do-russ...
Edit: also https://prospect.org/economy/falling-upward-larry-summers/
https://www.npr.org/sections/money/2022/03/22/1087654279/how...
And finally https://cepr.net/can-we-blame-larry-summers-for-the-collapse...
And just like Dropbox, in the end, what disruption? GPT will just be a checkbox for products others build. Cool tech, but not a full product.
Of course, I'd love to be proven wrong.
"OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."
The statement "it would be consistent with the company mission to destroy the company" is correct. The phrase "would be" rather than "is" implies some condition; it doesn't have to apply to the current circumstances.
A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.
Most are doing the work they love, and four people almost destroyed it and cannot even explain why they did it. If I were working at the company that did this, I would sign, too. And follow through on the threat of leaving if it came to that.
Being an engineer at [insert big tech company] is no guarantee of, or insight into, critical thinking at another one. The control and power over OpenAI was always with Microsoft, regardless of board seats and access. Sam was just the lieutenant of an AI division, and the engineers were just following the money like a carrot on a stick.
Of course, the engineers don't care about power dynamics until their paper options are at risk. Then it becomes highly psychological and emotional for them and they feel powerless and can only follow the leader to safety.
The BOD (Board of Directors), with Adam D'Angelo (the one who likely instigated this), has shown it will take unprecedented steps to remove board members and fire the CEO for very illogical and vague reasons. They already made their mark, and the damage is already done.
Let's see if the engineers who signed this will learn from this theatrical lesson in how not to do governance and how to run an entire company into the ground for unspecified reasons.
Any other outcome would have split OpenAI quite dramatically and put them back massively.
Big assumption to say 'effectively controlled by Microsoft' when Microsoft might have been quite happy for the other option and for them to poach a lot of staff.
Leaving the economics aside, even making the tech 'greener' will be a challenge. OpenAI will win if they focus on making the models less compute-intensive, but it could be dangerous for them if they can't.
I guess the OP's brewing disruptor is some locally runnable Llama type model that does 80% of what ChatGPT does at a fraction of the cost.
Perhaps. Yet this time they somehow managed to make the seemingly right decisions (from their perspective), despite themselves.
Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.
That's not at all obvious; the opposite seems to be the case. They chose to risk having to move to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly, it wouldn't have been worth much in the end with no one left to do the actual work).
So instead of compromising to some extent but still having a say in what happens next, you burn the company down, at best delaying the whole thing by 6-12 months until someone else does it? Well, at least your hands are clean, but that's about it...
One wonders what will happen with Emmett Shear's "investigation" into the process that led to Sam's ousting [0]. Was it even allowed to start?
Or medieval Spain? About as likely... The Soviets weren't even able to get the factory floors clean enough to consistently manufacture the 8086 10 years after it was already outdated.
> maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".
Unfortunately, no other system besides capitalism has enabled consistent technological progress for 200+ years. Turns out you need to pool money and resources to achieve things...
Does OpenAI have by-laws committing itself to being "open" (as in open source or at least their products freely and universally available)? I thought their goals were the complete opposite of that?
Unfortunately, in reality Facebook/Meta seems to be more open than "Open"AI.
To be fair, Fridman grilled Musk on his views today, also in the context of xAI, and he was less clear cut there, talking about the problem that there's actually very little source code, it's mostly about the data.
Also, they will have a hard time joining any other board from now on.
They should have backed up the claims in the letter. They didn’t.
This means they had no way to back up their claims. They didn't think it through… extremely amateurish behavior.
The OpenAI employees overwhelmingly rejected the groupthink of the Effective Altruism cult.
People underestimate the effects of social pressure, and losing social connections. Ilya voted for Sam's firing, but was quickly socially isolated as a result
That's not to say people didn't genuinely feel committed to Sam or his leadership. Just that they also took into account that the community is relatively small and people remember you and your actions
However, when that one article does come up, and I know the details inside and out, the comment sections are rife with bad assumptions, naïve comments, and misinformation.
Do you have a source for this?
Just because they sided with Altman doesn't necessarily mean they are aligned. There could be a lack of information on the employee/investor side.
Do you feel the same way about Reed Hastings serving on Facebook's BoD, or Eric Schmidt on Apple's? How about Larry Ellison at Tesla?
These are just the lowest-hanging fruit, i.e. literal chief executives and founders. If we extend the criteria for ethical compromise to include every board member's investment portfolio, I imagine quite a few more "obvious" conflicts will emerge.
I just renewed my HN subscription to be able to see Season 2!
Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?
Or that said apple pie was essential to their survival.
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
On getting his way: the Wall Street Journal article said he usually got his way, but that he was so skillful at it that they were hard-pressed to explain exactly how he managed to pull it off.
https://archive.is/20231122033417/https://www.wsj.com/tech/a...
Bottom line he had a lot more power over the board then than he will now.
This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.
https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...
Oracle is going to get into EVs?
You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.
Like I would become immediately suspicious if food packaging had “real food” written on it.
What does that even mean?
In any case, it's not OpenAI, it's Microsoft, and it has a long history of winning and bouncing back.
Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/
We'll all likely never know what truly happened, but it's a shame that the board has lost its last remnant of diversity and at the moment appears to be composed of rich Western white males... Even if they rushed for profit, I'd have more faith in the potential upside of what could be a sea change in the world if those involved reflected more experiences than are currently gathered at that table.
I'd say the lack of a narrative from the board, general incompetence with how it was handled, the employees quitting and the employee letter played their parts too.
But even if it was Microsoft who made this happen: that's what happens when you have a major investor. If you don't want their influence, don't take their money.
1. Did you really think the feds wouldn't be involved?
AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.
2. Considering how much impact it will have on the entire economy by being able to put many white collar workers out of work, a seasoned economist makes sense.
The East Coast runs the joint. The West Coast just does the public-facing tech stuff and takes the heat from the public.
Which is utterly scary.
You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and that I think nobody has taken seriously for years.
From everything I can tell, the people working at OpenAI have always cared more about advancing the space and building great products than "openness" and "safe AGI". The official values of OpenAI were never "their own".
By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.
But you don't have to just take my word for it:
> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”
https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...
> This is what happened with Eric Schmidt on Apple’s board
Yes, after 3 years. A tenure longer than the OAI board members in question, so frankly the point stands.
Sign the letter and support Sam so you have a place at Microsoft if OpenAI tanks and a place at OpenAI if it continues under Sam; or don't sign and potentially lose your role at OpenAI if Sam stays, and lose a bunch of money if Sam leaves and OpenAI fails.
There are no perks to not signing.
Doing AI for ChatGPT just means you know a single model really well.
Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.
It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.
Why was his role as a CEO even challenged?
>It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have it in its DNA to win; they're too short-sighted and reactive. Big tech will have incredible distribution power, but a real disruptor must be brewing somewhere unnoticed, for now.
Always remember: Google wasn't the first search engine, nor was the iPhone the first smartphone. First movers bring innovation and set trends, not market dominance.
OpenAI is now just a tool used by businesses. And businesses don't have a good history of benefiting humanity recently.
Now, yes, they definitely are.
IMO OpenAI’s governance is far less trustworthy today than it was yesterday.
Not to mention Google never paraded itself around as a non-profit acting in the best interests of humanity.
No it's not. Microsoft didn't know about this until minutes before the press release.
Investors are free to protest decisions against their principles and people are free to move away from their current company.
Or in Arthurian times. Very different values.
I think Sam came out the winner. He gets to pick his board. He gets to narrow down his employees. If anything, this sets him up for dictatorship. The only other overseers are the investors. In that respect, Microsoft came out holding a leash. No MS means no Sam, which also means employees have no say.
So it is more like MS > Sam > employees. MS+Sam > rest of investors.
You seem to be equating AI with magic, which it is very much not.
Probably precisely what Condoleezza Rice was doing on Dropbox's board. Or that board filled with national-security-state heavyweights at that "visionary" and her blood-testing thingie.
https://www.wired.com/2014/04/dropbox-rice-controversy/
https://en.wikipedia.org/wiki/Theranos#Management
In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m
“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)
https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...
Using that definition, even the local go-kart rental place or the local jet-ski rental place competes with Facebook.
If you want to use that definition, you might want to also add a criterion for minimum company size.
People here used to back up their bold claims with arguments.
Not that it's really in need of additional evidence.
I don't consider this confirmed. Microsoft brought an enormous amount of money and other power to the table, and their role was certainly big, but it is far from clear to me that they held all or most of the power that was wielded.
Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jetskis or gokarts.
> If you want to use that definition you might want to also add a criteria for minimum size of the company.
Your feedback is noted.
Do we disagree on whether or not the two FAANG companies in question are in competition with each other?
It hasn't disrupted mine in any way. It may do that in the future, but the future isn't here yet.
So the type of employee that would get hired at OpenAI isn't likely to be skilled at critical thinking? That's doubtful. It looks to me like you dislike how things played out, gathered together some mean adjectives and "groupthink", and ended with a pessimistic prediction for their trajectory as punishment. One is left to wonder what OpenAI's disruptor outlook would be if the outcome of the current situation had been more pleasing.
Not to mention Roko's basilisk /s
I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.
This is not about soap opera, this is about business and a big part is based on trust.
The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.
And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.
Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.
We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.
Only time will tell if this was a good or bad outcome, but for now the damage is done and OpenAI has a lot of trust rebuilding to do to shake off the reputation that it now has after this circus.
What the process did show is that if you plan to oust a popular CEO with a thriving company, you should actually have a good reason for it. It's amazing how little thought seemingly went into it for them.
What leads you to make such a definitive statement? To me the process shows that Microsoft has no pull in OpenAI.
When you see 95%+ consensus from 800 employees, that doesn't suggest tanks and police dogs intimidating people at the voting booth.
Board member Helen Toner strongly criticized OpenAI for publicly releasing its GPT when it did rather than keeping it closed for longer. Many people would see that as working against openness, but others would see it as working towards safe AI.
The thing is, people have radically different ideas about what openness and safety mean. There's a lot of talk about whether or not OpenAI stuck with its stated purpose, but there's no consensus on what that purpose actually means in practice.
Yes, you are right that the board had weak sauce reasoning for the firing (giving two teams the same project!?!).
That said, the other commenter is right that this is the beginning of the end.
One of the interesting things over the past few years watching the development of AI has been that in parallel to the demonstration of the limitations of neural networks has been many demonstrations of the limitations of human thinking and psychology.
Altman just got given a blank check and crowned as king of OpenAI. And whatever opposition he faced internally just lost all its footing.
That's a terrible recipe for long term success.
Whatever the reasons for the firing, this outcome is going to completely screw their long term prospects, as no matter how wonderful a leader someone is, losing the reality check of empowered opposition results in terrible decisions being made unchecked.
He's going to double down on chat interfaces, because that's been their unexpected bread and butter, right up until they get lapped by companies with broader product vision. And whatever elements at OpenAI shared that broader vision are going to get steamrolled now that he's been given an unconditional green light, until they jump ship over the next 18 months to work elsewhere.
For a company at the forefront of AI it’s actually very, very human.
Just throwing this out there, but maybe … non-profits shouldn't be considered holier-than-thou, just because they are “non-profits”.
"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".
- peer pressure
- group think
- financial motives
- fear of the unknown (Sam being a known quantity)
- etc.
So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.
If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.
The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.
Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.
Giving me a billion $ would be a net benefit to humanity as a whole
I am not saying something nefarious forced it, but it’s certainly unusual in my experience and this causes me to be skeptical of why.
Corporations have no values whatsoever, and their statements only mean anything when expressed in terms of a legally binding contract. All corporate value statements should be viewed as nothing more than the kind of self-serving statements an amoral, narcissistic sociopath would make to protect their own interests.
That's not the bar you are arguing against.
You are arguing against how you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have direct, first-person account on the inner workings of the organization.
We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.
You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.
I was about to state that a single human is enough for disagreements to arise, but this doesn't reach full consensus in my mind.
I think yes, because you pay for Netflix out of pocket, whereas Facebook is a free service.
I believe Facebook vs. Hulu or regular TV is more of a competition in the attention economy: when the commercial break comes up, you start scrolling social media on your phone, and every ten posts or so you stumble into the ads placed there. So Facebook ads are seen and convert, whereas regular TV and Hulu ads aren't seen and don't convert.
We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.
They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.
But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.
It's a very influential essay.
In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.
The interim CEO said the board couldn’t even tell him why the old CEO was fired.
Microsoft said the board couldn’t even tell them why the old CEO was fired.
The employees said the board couldn’t explain why the CEO was fired.
When nobody can even begin to understand the board’s actions and they can’t even explain themselves, it’s a recipe for losing confidence. And that’s exactly what happened, from investors to employees.
It's always written by PR people with marketing in mind.
Three was the compromise I made with myself.
Frankly these EA & e/acc cults are starting to get on my nerves.
And there’s a difference between, “an explanation would help their credibility” versus “a lack of explanation means they don’t have a good reason.”
Nobody cares, except shareholders.
I suspect incentives play a huge role here. OAI employees are compensated with stock in the for-profit arm of the company. It's obvious that the board's actions put the value of that stock in extreme jeopardy (which, given the corporate structure, is theoretically completely fine! the whole point of the corporate structure is that the nonprofit board has the power to say "yikes, we've developed an unsafe superintelligence, burn down the building and destroy the company now").
I think it's natural for employees to be extremely angry with a board decision that probably cost them >$1M each.
The employees of a tech company banded together to get what they wanted, force a leadership change, evict the leaders they disagreed with, secure the return of the leadership they wanted, and restored the value of their hard-earned equity.
This certainly isn’t a disappointing outcome for the employees! I thought HN would be ecstatic about tech employees banding together to force action in their favor, but the comments here are surprisingly negative.
So clearly the current leadership built a loyal group, which I think is something that should be explored, because groupthink is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of ideas.
If OpenAI is a huge monoculture of thinking, then they most likely have bigger problems.
1. The company has built a culture around not being under the control of any single company, Microsoft in this case. Employees may overwhelmingly agree.
2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board hadn't been replaced.
3. Younger folks probably don't look highly at boards in general, because they never get to interact with them. They also sometimes dictate product outcomes that could go against the creative freedoms and autonomy employees are looking for. Boards are also focused on profits, which is a net-good for the company, but threatens the culture of "for the good of humanity" that hooks people.
4. The high success of OpenAI has probably inspired loyalty in its employees, so long as it remains stable, and their perception of what stability is means that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There's no guarantees for the bulk of workers here.
I'm reading into the variables and using intuition to make these guesses, but all to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.
Companies do not desire or seek philosophical diversity; they only want superficial, biologically based "diversity" to prove they have the "correct" philosophy about the world.
Voter approval is actually usually much less unanimous, as far as I can tell.
I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.
95% doesn't show a large amount of loyalty to Sam it shows a low amount of loyalty to OpenAI.
So it looks like a VERY normal company.
Investors and executives.. everyone in 2023 is hyper-focused on the "Thiel monopoly."
Platform, moat, aggregation theory, network effects, first-mover advantages.. all those ways of thinking about it.
There's no point in being Bing to Google's AdWords... So the big question is the pathway to being the AdWords. "Winning." That's the paradigm. That's where the big returns will be.
However.. we should always remember that the future is harder to see from the past. Post-fact analysis can often make things seem a lot simpler and more inevitable than they ever were.
It's not clear what a winner even is here. What are the bottlenecks to be controlled? What are the business models, the revenue sources? What represents the "LLM Google," the America Online, the Yahoo, or the 90s dumb pipe?
FWIW I think all the big techs have powerful plays available.. including keeping their powder dry.
No doubt proximity to OpenAI, control, influence, access to IP.. are all strategic assets. That's why they're all invested and involved in the consortium.
That said, assets are not strategies. It's hard to have a strategy when strategic goals are unclear.
You can nominate a strategic goal from here, try to stay upstream, make exploratory investments and bets... There is no rush for the prize unless the prize is known.
Obviously, I'm assuming the prize is not AGI and a solution to everything... That kind of abstraction is useful, but I do not think it's operative.
It's not a race, currently, to see whose R&D lab turns on the first superintelligent consciousness.
Assuming I'm correct on that, we really have no idea which applications LLM capabilities companies are actually competing for.
Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.
The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?
DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.
On a similar note, the company has already established certain missions and values that new hires may strongly align with like: "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever and it would be monumentally exciting to play a part in that.
Thus, it's safe to think that most employees who are lucky to have earned a chance at participating would want to preserve that, if they're aligned.
This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.
Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.
You say "groupthink" like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.
And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.
It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.
Ironically, it snuffs out diversity among companies at a 40k foot level.
All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.
Participating in that is assimilation.
Unvalidated, unsigned letter [1]
>>All companies are monocultures
Yes and no. There has to be diversity of thought to ever get anything done; if everyone is just a sycophant agreeing with the boss, you end up with very bad product choices and even worse company direction.
Yes, there has to be some commonality, some semblance of shared vision or values, but I don't think that makes a "monoculture".
[1] https://wccftech.com/former-openai-employees-allege-deceit-a...
So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."
Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum; this is the real singularity. This is actually the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of "The Last Question" by Isaac Asimov [1]. Notice how, in answering a question, the AI performs an action instead of providing an informational reply, which is only possible because we live in a universe with mass-energy equivalence, analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
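For a rough sense of the physical ceiling that [2] alludes to, here is a minimal back-of-the-envelope note (assuming the standard statement of Bremermann's limit from the linked page):

B = c^2 / h ≈ (2.998×10^8 m/s)^2 / (6.626×10^-34 J·s) ≈ 1.36×10^50 bits per second per kilogram of matter

So even a computer built from all of Earth's mass (~6×10^24 kg) would be rate-limited to roughly 10^75 bits per second, which puts "knowing all possible truths" firmly in consume-the-universe territory, as the comment suggests.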
Unless they had something in their "DNA" that allowed them to build enough compute and pay their employees, they were never going to "win" without a mass infusion of cash. Only three companies had enough compute and revenue to throw at them, and only two had relationships with big enterprise and compute: Amazon and Microsoft.
There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.
"Because someone acts differently than I expected, they must lacks of critical thinking."
Are you an insider? If not, have you considered that perhaps OpenAI employees are more informed about the situation than you?
It's absolutely believable that at first he thought the best way to safeguard AI was to get rid of the main advocate for profit-seeking at OpenAI, then when that person "fell upward" into a position where he'd have fewer constraints, to regret that decision.
Also, working for a subsidiary (which was likely going to be given much more self-governance than working directly at megacorp), doesn’t necessarily mean “evil”. That’s a very 1-dimensional way to think about things.
Self-disclosure: I work for a megacorp.
Apple and Microsoft even have the strongest financial results in their lifetime.
And if they do prefer it as a for profit company, why would that make them morally bankrupt?
The idea that the marketplace is a meritocracy of some kind where whatever an individual deems as "merit" wins is just proven to be nonsense time and time again.
And while also working for a for-profit company.
It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?
I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.
Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.
Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the mission at OpenAI?
Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.
DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.
Take the movement to redefine Math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since its origins are "too white". Oh the horror! This is false, as there are many prominent non-white mathematicians that existed prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.
Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.
Do you agree that the following company pairs are competitors?
* FB : TikTok
* TikTok : YT
* YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix....
To be clear, this is an abuse of logic and hence somewhat tongue-in-cheek, but I also don't think either of the above comparisons is wholly unreasonable. At the end of the day, it's eyeballs all the way down, and everyone wants as many of them shabriri grapes as they can get.
It's a common theme in the overall critique of late stage capitalism, is all I'm saying — and that it could be a factor in influencing OpenAI's employees' decisions to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.
The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.
People concerned about AI safety were probably not going to join in the first place...
Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.
You forgot to do Oracle and Tesla.
> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.
https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...
> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.
https://variety.com/2023/biz/news/wga-ratify-contract-end-st...
It's a post-"Don't be evil" world today.
That is a part of the reason why organizations choose to set themselves up as a non-profit, to help codify those morals into the legal status of the organization to ensure that the ingrained selfishness that exists in all of us doesn’t overtake their mission. That is the heart of this whole controversy. If OpenAI was never a non-profit, there wouldn’t be any issue here because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for profit businesses and there would be no room for the board to fire or even really criticize Sam.
Not sure what event you're thinking of, but Google went public before it was 10 years old, and it started its first ad program barely more than a year after forming as a company in 1998.
It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.
It's a gut check on morals/ethics, for sure. I'm always pretty torn on the tipping point for empathizing there in an industry like tech, though, even more so in AI, where all the money is today. Our industry is paid extremely well, and anyone who wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would easily have been 800 jobs floating around for AI experts who chose to leave OpenAI because they preferred the for-profit approach.
At least how I see it, Sam coming back to OpenAI is OpenAI abandoning the original vision and leaning full into developing AGI for profit. Anyone that worked there for the original mission might as well leave now, they'll be throwing AI risk out the window almost entirely.
This was said loud and clear when Microsoft joined in the first place but there were no takers.
GP didn't speak of betraying people; he spoke of betraying their own statements. That just means doing what you said you wouldn't; it doesn't mean anyone was stabbed in the back.
This is still making the same assumption. Why are you assuming they are acting outside of self-interest?
Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.
He is explicitly saying they don’t compete. And they don’t.
If you throw your hands up and say, "Well, kudos to them, they're actually fulfilling their goal of being a non-profit. I'm going to find a new job," that's fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that's on you.
At this point I suspect you are being deliberately obtuse. Have a good day.
His voting power will get diluted as they add the next six members, but again, all three of them are going to decide who the next members are going to be.
A snippet from the recent Bloomberg article:
>A person close to the negotiations said that several women were suggested as possible interim directors, but parties couldn’t come to a consensus. Both Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated, *but deemed to be too close to Altman*, this person said.
Say what else you want about it, this is not going to be a board automatically stacked in Altman's favor.
I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.
[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.
Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.