That they reached a different conclusion from the one you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.
Their bank accounts' current and potential future numbers?
Even if they genuinely believe firing Sam protects OpenAI's founding principles, they couldn't be doing a better job of convincing everyone they are NOT able to execute on it.
OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they don't vote the way you agree with is reaching.
Make of that what you will. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.
They couldn’t sit back and dwell on it for a few days because then the decision (i.e. the status quo) would have been made for them.
The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.
https://www.theverge.com/2023/11/20/23968988/openai-employee...
How do you know?
> look at how “quickly” everyone got pulled into
Again, how do you know?
The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.
Being an expert in one particular field (AI) does not mean you are good at critical thinking or at thinking about strategic corporate politics.
Deep experts are some of the easier con targets because they suffer from an internal version of “appealing to false authority”.
Of course the employees want the company to continue, and they hadn't been told much at that point, so it's understandable that they didn't like the statement.
No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.
Heck, there are 700 of them. All different humans, good at some things, bad at others. But they are smart. And of course a good chunk of them would be good at corporate politics too.
But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.
Stupidity is defined by self-harming actions and beliefs, not by low IQ.
You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project
That wasn't the case. So it may not be so far-fetched to call her actions borderline, as it is also very easy to hide personal motives behind altruistic ones.
"OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."
The statement "it would be consistent with the company mission to destroy the company" is correct. The phrasing "would be" rather than "is" implies some condition; it doesn't have to apply to the current circumstances.
A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.
Perhaps. Yet this time they somehow managed to make the seemingly right decisions (from their perspective) despite themselves.
Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.
That's not at all obvious; the opposite seems to be the case. They chose to risk having to move to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly, it wouldn't be worth much in the end with no one left to do the actual work).
So instead of compromising to some extent but still having a say in what happens next, you burn the company down, at best delaying the whole thing by 6-12 months until someone else does it? Well, at least your hands are clean, but that's about it...
Do you feel the same way about Reed Hastings serving on Facebook's BoD, or Eric Schmidt on Apple's? How about Larry Ellison at Tesla?
These are just the lowest-hanging fruit, i.e. literal chief executives and founders. If we extend the criteria for ethical compromise to include every board member's investment portfolio, I imagine quite a few more "obvious" conflicts will emerge.
Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?
Or that said apple pie was essential to their survival.
This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.
https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...
Oracle is going to get into EVs?
You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.
Like I would become immediately suspicious if food packaging had “real food” written on it.
Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/
1. Did you really think the feds wouldn't be involved?
AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.
2. Considering how much impact it will have on the entire economy, with the potential to put many white-collar workers out of work, bringing in a seasoned economist makes sense.
The East Coast runs the joint. The West Coast just does the public-facing tech stuff and takes the heat from the public.
Which is utterly scary.
You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and that I think nobody has taken seriously for years.
From everything I can tell, the people working at OpenAI have always cared more about advancing the space and building great products than about "openness" and "safe AGI". The official values of OpenAI were never "their own".
By definition, the attention economy dictates that time spent in one place can't be spent in another. Do you also feel as though Twitch doesn't compete with Facebook simply because they're not identical businesses? That's not how it works.
But you don’t have to just take my word for it:
> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”
https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...
> This is what happened with Eric Schmidt on Apple’s board
Yes, after 3 years, a tenure longer than that of the OAI board members in question, so frankly the point stands.
Sign the letter and support Sam, so you have a place at Microsoft if OpenAI tanks and a place at OpenAI if it continues under Sam; or don't sign, and potentially lose your role at OpenAI if Sam stays, and lose a bunch of money if Sam leaves and OpenAI fails.
There are no perks to not signing.
Doing AI for ChatGPT just means you know a single model really well.
Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.
It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.
Not to mention Google never paraded itself around as a non-profit acting in the best interests of humanity.
You seem to be equating AI with magic, which it is very much not.
Probably precisely what Condoleezza Rice was doing on Dropbox's board. Or that board filled with national security state heavyweights at that "visionary" and her blood-testing thingie.
https://www.wired.com/2014/04/dropbox-rice-controversy/
https://en.wikipedia.org/wiki/Theranos#Management
In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m
“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)
https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...
Using that definition, even the local go-kart rental place or the local jet-ski rental place competes with Facebook.
If you want to use that definition, you might want to also add a criterion for minimum company size.
Not that it's really in need of additional evidence.
Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jet skis, or go-karts.
> If you want to use that definition, you might want to also add a criterion for minimum company size.
Your feedback is noted.
Do we disagree on whether or not the two FAANG companies in question are in competition with each other?
I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.
The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.
And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.
Stupidity is being presented with a problem and an associated set of information and being unable, or less able than others, to find the solution. That's literally it.
We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.
Only time will tell if this was a good or bad outcome, but for now the damage is done and OpenAI has a lot of trust rebuilding to do to shake off the reputation that it now has after this circus.
When you see 95%+ consensus from 800 employees, that doesn't suggest tanks and police dogs intimidating people at the voting booth.
Board member Helen Toner strongly criticized OpenAI for publicly releasing its GPT when it did and for not keeping it closed longer. That would seem to be working against openness to many people, but others would see it as working towards safe AI.
The thing is, people have radically different ideas about what openness and safety mean. There's a lot of talk about whether or not OpenAI stuck with its stated purpose, but there's no consensus on what that purpose actually means in practice.
Just throwing this out there, but maybe … non-profits shouldn't be considered holier-than-thou, just because they are “non-profits”.
"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".
- peer pressure
- group think
- financial motives
- fear of the unknown (Sam being a known quantity)
- etc.
So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.
If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.
The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.
Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.
Giving me a billion $ would be a net benefit to humanity as a whole
I am not saying something nefarious forced it, but it's certainly unusual in my experience, and this causes me to be skeptical about why.
That's not the bar you are arguing against.
You are arguing against how you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have direct, first-person account on the inner workings of the organization.
We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.
You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.
I was about to state that a single human is enough to see disagreements arise, but even this doesn't reach full consensus in my mind.
I think yes, because you pay for Netflix out of pocket, whereas Facebook is a free service.
I believe Facebook vs. Hulu or regular TV is more of a competition in the attention economy, because when the commercial break comes up you start scrolling your social media on your phone, and every 10 posts or whatever you stumble into the ads placed on there. So Facebook ads are seen and convert, whereas regular TV and Hulu ads aren't seen and don't convert.
We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.
They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.
But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.
It's a very influential essay.
In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.
It's always written by PR people with marketing in mind.
Three was the compromise I made with myself.
Nobody cares, except shareholders.
So clearly the current leadership built a loyal group, which I think is something that should be explored, because groupthink is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of ideas.
If OpenAI is a huge monoculture of thinking, then they most likely have bigger problems.
1. The company has built a culture around not being under control by one single company, Microsoft in this case. Employees may overwhelmingly agree.
2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board wasn't replaced.
3. Younger folks probably don't look highly on boards in general, because they never get to interact with them. Boards also sometimes dictate product outcomes that can go against the creative freedom and autonomy employees are looking for. Boards are also focused on profits, which is a net good for the company but threatens the culture of "for the good of humanity" that hooks people.
4. The high success of OpenAI has probably inspired loyalty in its employees, so long as it remains stable, and to them stability means that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There are no guarantees for the bulk of workers here.
I'm reading into the variables and using intuition to make these guesses, but all of this is to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.
Companies do not desire or seek philosophical diversity; they only want superficial, biologically based "diversity" to prove they have the "correct" philosophy about the world.
Voter approval is actually usually much less unanimous, as far as I can tell.
I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.
95% doesn't show a large amount of loyalty to Sam; it shows a low amount of loyalty to OpenAI.
So it looks like a VERY normal company.
Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.
The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?
DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.
On a similar note, the company has already established certain missions and values that new hires may strongly align with, like "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever, and it would be monumentally exciting to play a part in that.
Thus, it's safe to think that most employees lucky enough to have earned a chance at participating would want to preserve that, if they're aligned.
This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.
Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.
You say “group think” like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.
And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.
It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.
Ironically, it snuffs out diversity among companies at the 40,000-foot level.
All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.
Participating in that is assimilation.
Unvalidated, unsigned letter [1]
>>All companies are monocultures
Yes and no. There has to be diversity of thought to ever really get anything done; if everyone is just a sycophant agreeing with the boss, then you end up with very bad product choices and even worse company direction.
Yes, there has to be some commonality, some semblance of shared vision or values, but I don't think that makes a "monoculture".
[1] https://wccftech.com/former-openai-employees-allege-deceit-a...
So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."
Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum. This is the real singularity. It is actually the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how, in answering a question, the AI performs an action instead of providing an informational reply, which is only possible because we live in a universe with mass-energy equivalence, analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
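For a rough sense of scale on the energy-yields-compute claim (my own back-of-the-envelope arithmetic using the constants from [2] and [3], not the original commenter's numbers), Bremermann's limit works out to

  R_max = m·c^2 / h ≈ (1 kg × (3×10^8 m/s)^2) / (6.626×10^-34 J·s) ≈ 1.36×10^50 bits/s

per kilogram of perfectly used mass-energy, i.e. astronomically beyond all present-day compute combined.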
Understanding prosociality and post-scarcity, and the division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources, requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.
Also, working for a subsidiary (which was likely going to be given much more self-governance than working directly at megacorp), doesn’t necessarily mean “evil”. That’s a very 1-dimensional way to think about things.
Self-disclosure: I work for a megacorp.
Apple and Microsoft even have the strongest financial results of their lifetimes.
And if they do prefer it as a for profit company, why would that make them morally bankrupt?
And while also working for a for-profit company.
It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?
I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.
Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.
Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the mission at OpenAI?
Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.
DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.
Take the movement to redefine Math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since its origins are "too white". Oh the horror! This is false, as there are many prominent non-white mathematicians that existed prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.
Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.
Do you agree that the following company pairs are competitors?
* FB : TikTok
* TikTok : YT
* YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix....
To be clear, this is an abuse of logic and hence somewhat tongue-in-cheek, but I also don't think either of the above comparisons is wholly unreasonable. At the end of the day, it's eyeballs all the way down, and everyone wants as many of them shabriri grapes as they can get.
It's a common theme in the overall critique of late stage capitalism, is all I'm saying — and that it could be a factor in influencing OpenAI's employees' decisions to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.
The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.
People concerned about AI safety were probably not going to join in the first place...
Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.
You forgot to do Oracle and Tesla.
> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.
https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...
> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.
https://variety.com/2023/biz/news/wga-ratify-contract-end-st...
It's a post-"Don't be evil" world today.
That is a part of the reason why organizations choose to set themselves up as a non-profit, to help codify those morals into the legal status of the organization to ensure that the ingrained selfishness that exists in all of us doesn’t overtake their mission. That is the heart of this whole controversy. If OpenAI was never a non-profit, there wouldn’t be any issue here because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for profit businesses and there would be no room for the board to fire or even really criticize Sam.
Not sure what event you're thinking of, but Google went public before it was 10 years old, and they started their first ad program just barely more than a year after forming as a company in 1998.
It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.
It's a gut check on morals/ethics for sure. I'm always pretty torn on the tipping point for empathising there in an industry like tech, though, even more so for AI, where all the money is today. Our industry is paid extremely well, and anyone who wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would easily have been 800 jobs floating around for AI experts who chose to leave OpenAI because they preferred the for-profit approach.
At least how I see it, Sam coming back to OpenAI means OpenAI abandoning the original vision and leaning fully into developing AGI for profit. Anyone who worked there for the original mission might as well leave now; they'll be throwing AI risk out the window almost entirely.
This is still making the same assumption. Why are you assuming they are acting outside of self-interest?
Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.
He is explicitly saying they don’t compete. And they don’t.
If you throw your hands up and say, "Well, kudos to them, they're actually fulfilling their goal of being a non-profit. I'm going to find a new job," that's fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that's on you.
At this point I suspect you are being deliberately obtuse. Have a good day.
I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.
[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.
Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.