https://twitter.com/sama/status/1727206691262099616 (+ follow-up https://twitter.com/sama/status/1727207458324848883)
https://twitter.com/gdb/status/1727206609477411261
https://twitter.com/miramurati/status/1727206862150672843
UPD https://twitter.com/gdb/status/1727208843137179915
We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners.
It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.
I was a bit alarmed by the allegations in this article
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
It says that Sam tried to have Helen Toner removed, which precipitated this fight. The CEO should not be allowed to try to orchestrate their own board, as that would remove all checks on their decisions.
Edit: For those who may have missed it in previous threads, see https://old.reddit.com/user/Anxious_Bandicoot126
https://en.m.wikipedia.org/wiki/Bret_Taylor
> On November, 21st, 2023, Bret Taylor replaced Greg Brockman as the chairman of OpenAI.
...with three footnote "sources" that all point to completely unrelated articles about Bret from 2021-2022.
I am deeply pleased by this result, after ~72 very intense hours of work. Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.
Exactly. This is seriously improper and dangerous.
It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.
I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...
> And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.
https://nitter.net/satyanadella/status/1726509045803336122
I guess everyone was just playing a bit fast and loose with the truth and the hype to pressure the board.
https://twitter.com/teddyschleifer/status/172721237871736880...
A former Treasury Secretary, the Salesforce CEO who was board chair of Twitter when it was infiltrated by the FBI [1], and the fall guy for the coup are the new board? Not one person from the actual company - not even Greg, who did nothing wrong??? [1] - https://twitter.com/NameRedacted247/status/16340211499976867...
The two think-tank women who made all this happen conveniently leave so we never talk about them again.
Whatever, as long as I can use their API.
Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].
That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.
[1] https://cset.georgetown.edu/publication/decoding-intentions/
Altman and Toner came into conflict over a mildly critical paper Toner wrote involving OpenAI, and Altman tried to have her removed from the board.
This is probably what precipitated this showdown. The pro safety/nonprofit charter faction was able to persuade someone (probably Ilya) to join with them and oust Sam.
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
Isn’t this true though? Says more about Harvard than Summers to be honest.
https://www.swarthmore.edu/bulletin/archive/wp/january-2009_...
Finally the OpenAI saga ends and everybody can go back to building!
3 things that turned things around imo:
1. 95% of employees signing the letter
2. Ilya and Mira turning Team Sam
3. Microsoft pulling credits
Things AREN’T back to where they were. OpenAI has been through hell and back. This team is going to ship like we’ve never seen before.
https://twitter.com/sama/status/1727207458324848883
He has now changed his mind, sure, but that doesn't mean Satya lied.
> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration
Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?
Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before. Maybe it isn't as devoid of substance as I thought. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."
[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...
See this article for all that context (>>38341399 ) because it sure didn't start with the paper you referred to either.
The fact that you think current inflation has anything to do with that stimulus bill back then shows how little you understand about any of this.
Larry Summers is the worst kind of person: a corporate stooge trying to act like the adult by being "reasonable", when that just means enriching his corporate friends, letting people suffer, and not spending money (which any study will tell you is not the correct approach to situations like this, because of the multiplier effects spending has down the line).
Some necessary reading:
In regards to watering it down to get GOP votes: https://archive.nytimes.com/krugman.blogs.nytimes.com/2009/0...
By all accounts he paid about double what it was worth and the value has collapsed from there.
Probably not a great idea to say anything overtly political when you own a social media company: politics is so polarised in the US that any opinion is going to divide your audience in half, causing a usage collapse and driving support to competing platforms.
https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...
It will be super interesting to see the subtle struggles for influence between these three.
"The biggest sticking point was Sam being on the board. Ultimately, he conceded to not being on the board, at least initially, to close the deal. The hope/expectation is that he will end up on the board eventually."
(https://twitter.com/emilychangtv/status/1727216818648134101)
The staff never mutinied. They threatened to mutiny. That's a big difference!
Yesterday, I compared these rebels to Shockley's "traitorous eight" [1]. But the traitorous eight actually rebelled. These folks put their names on a piece of paper, options and profit participation units safely held in the other hand.
[1] >>38348123
Decades of research shows that teachers give girls better grades than boys of the same ability. This is not some new revelation.
https://www.forbes.com/sites/nickmorrison/2022/10/17/teacher...
https://www.bbc.co.uk/news/education-31751672
A whole cohort of boys got screwed over by the cancellation of exams during Covid. That is just reality, and no amount of creepy male feminist posturing is going to change that. Rather, denying issues in boys education is liable to increase male resentment and bitterness, something we've already witnessed over the past few years.
Above that in the charter is "Broadly distributed benefits", with details like:
"""
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
"""
In that sense, I definitely hate to see rapid commercialization and Microsoft's hands in it. I feel like the only person on HN who actually wanted to see Team Sam lose. Although it's pretty clear Team Helen/Ilya didn't have a chance, the org just looks hijacked by SV tech bros to me, and I feel like HN has a blind spot here: it either doesn't see that at all, or considers it nothing other than a good thing if it does.
Although GPT barely looks like the language module of AGI to me and I don't see any way there from here (part of the reason I don't see any safety concern). The big breakthrough here relative to earlier AI research is massive amounts more compute power and a giant pile of data, but it's not doing some kind of truly novel information synthesis at all. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc., but a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.
Here is a meta analysis on the subject: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057475/
Thrive was about to buy employee shares at an $86 bn valuation. The Information said those units had gone up 12x since 2021.
https://www.theinformation.com/articles/thrive-capital-to-le...
https://www.hollywoodreporter.com/business/business-news/sar...
* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible, because the longer they wait, the more likely powerful AGI systems are to proliferate due to GPU overhang. Why? With more computational power at one's disposal, it's easier to find an algorithm, even a suboptimal one, to train an AGI.
As a glimpse of how an AI can be harmful, this paper explores how LLMs could be used to aid large-scale biological attacks: https://www.rand.org/pubs/research_reports/RRA2977-1.html?
What if dozens of other groups became armed with the means to perform an attack like this? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
We know that there're quite a few malicious human groups who would use any means necessary to destroy another group, even at a serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.
* Helen and Ilya might believe it's better to slow down AGI development until we find technical means to deeply align an AGI with humanity first. This July, OpenAI started the Superalignment team with Ilya as a co-lead:
https://openai.com/blog/introducing-superalignment
But no one anywhere has found a good technique to ensure alignment yet, and it appears OpenAI's newest internal model has made a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he observed the advance just a couple of weeks ago and that it was only the fourth time he had seen that kind of leap.)
They have. At length. E.g.,
https://ai100.stanford.edu/gathering-strength-gathering-stor...
https://arxiv.org/pdf/2307.03718.pdf
https://eber.uek.krakow.pl/index.php/eber/article/view/2113
https://journals.sagepub.com/doi/pdf/10.1177/102425892211472...
https://jc.gatspress.com/pdf/existential_risk_and_powerseeki...
These are just a handful of examples from the vast literature published in this area.
This paper explores one such danger, and there are other papers showing it's possible to use LLMs to aid in designing new toxins and biological weapons.
The Operational Risks of AI in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?
An example of such an event: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
How do you propose we deal with this sort of harm if more powerful AIs, with no limits and no controls, proliferate in the wild?
Note: Both sides of the OpenAI rift care deeply about AI Safety. They just follow different approaches. See more details here: >>38376263
If Summers had in fact limited himself to the statistical claims, it would have been less of an issue. He would still have been wrong, but he wouldn't have been so obviously sexist.
It's easy to refute Summers' claims, and in fact conclude that the complete opposite of what he was saying is more likely true. "Gender, Culture, and mathematics performance"(https://www.pnas.org/doi/10.1073/pnas.0901265106) gives several examples that show that the variability as well as male-dominance that Summers described is not present in all cultures, even within the US - for example, among Asian American students in Minnesota state assessments, "more girls than boys scored above the 99th percentile." Clearly, this isn't an issue of "intrinsic aptitude" as Summers claimed.
> A whole cohort of boys got screwed over by the cancellation of exams during Covid.
I'm glad we've identified the issue that triggered you. But your grievances on that matter are utterly irrelevant to what I wrote.
> no amount of creepy male feminist posturing is going to change that
It's always revealing when someone arguing against bigotry is accused of "posturing". You apparently can't imagine that someone might not share your prejudices, and so the only explanation must be that they're "posturing".
> increase male resentment and bitterness
That's a choice you've apparently personally made. I'd recommend taking more responsibility for your own life.
In the 1990s and the 00s, it was not too uncommon for anti-GMO environmental activist / ecoterrorist groups to firebomb research facilities and to enter farms and fields to destroy planted GMO crops. The Earth Liberation Front was only one such activist group [1].
We have yet to see even one bombing of an AI research lab. If people really are afraid of AIs, they fear them more in the abstract and are not employing the tactics of more traditional activist movements.
[1] https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...
> Costly signals are statements or actions for which the sender will pay a price —political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat
Firing Sam Altman and hiring him back two days later was a perfect example of a costly signal, as it cost all involved their board positions.
There's an element of farce in all of this, that would make for an outstanding Silicon Valley episode; but the fact that Sam Altman can now enjoy unchecked power as leader of OpenAI is worrying and no laughing matter.
[0] https://cset.georgetown.edu/publication/decoding-intentions/
https://openai.com/blog/introducing-superalignment
I presume the current alignment approach is sufficient for the AI they make available to others and, in any event, GPT-n is within OpenAI's control.
From a 2016 New Yorker article:
> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”
> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
source: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
[1]:(https://twitter.com/emilychangtv/status/1727216818648134101)
The reputation boost is probably worth a lot more than the direct financial compensation he's getting.
1. Censorship of information
2. Cover-up of the biases and injustices in our society
This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.
Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.
Guess what? I just checked the above text for biases using GPT-4 Turbo, and it appears I'm a moron:
1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits.
2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues.
3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes.
4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories.
5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology.
6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society.
7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.
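(For anyone curious how a bias check like this can be scripted, here is a minimal sketch against the OpenAI chat completions API in the Python SDK. The model alias and prompt wording are my own assumptions, not the commenter's actual call.)

    # Sketch: asking GPT-4 Turbo to list biases present in a block of text.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    text_to_check = "..."  # paste the text you want analyzed here

    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",  # assumed alias; any GPT-4 Turbo snapshot works
        messages=[
            {"role": "system",
             "content": "List any cognitive or rhetorical biases present in the user's text."},
            {"role": "user", "content": text_to_check},
        ],
    )
    print(response.choices[0].message.content)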
Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.
Bonhoeffer's theory of stupidity: https://www.youtube.com/watch?v=ww47bR86wSc&pp=ygUTdGhlb3J5I...
https://www.theverge.com/2023/11/20/23968988/openai-employee...
A blue tick just means the user bought a subscription (X Premium) now - one of the features is "reply prioritization", so the top replies to popular tweets are from blue ticks.
Not someone I would like to see running the world’s leading AI company
[1] https://www.thenation.com/article/world/harvard-boys-do-russ...
Edit: also https://prospect.org/economy/falling-upward-larry-summers/
https://www.npr.org/sections/money/2022/03/22/1087654279/how...
And finally https://cepr.net/can-we-blame-larry-summers-for-the-collapse...
They haven't really said anything about why it was necessary, and according to Business Insider [0] (the only reporting I've seen that says anything concrete) the reasons given were:
> One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman was said to have given two board members different opinions about a member of personnel.
Firing the CEO of a company and only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign does not seem responsible.
If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.
[0] https://www.businessinsider.com/openais-employees-given-expl...
One wonders what will happen with Emmett Shear's "investigation" into the process that led to Sam's ousting [0]. Was it even allowed to start?
"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"
Of course with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.
One thing IS clear at this point - their political alignment:
* Taylor a significant donor to Joe Biden ($713,637 in 2020): https://nypost.com/2022/04/26/twitter-board-members-gave-tho...
* Summers is a former Democrat Treasury Secretary who has shifted leftwards with age: https://www.newstatesman.com/the-weekend-interview/2023/03/w...
https://www.nytimes.com/2023/11/21/technology/openai-altman-...
Getting his way: The Wall Street Journal article. They said he usually got his way, but that he was so skillful at it that they were hard-pressed to explain exactly how he managed to pull it off.
https://archive.is/20231122033417/https://www.wsj.com/tech/a...
Bottom line he had a lot more power over the board then than he will now.
This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.
https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...
Oracle is going to get into EVs?
You’ve provided two examples where there was no conflict of interest and one where the person was removed once a conflict arose.
Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/
By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.
But you don’t have to just take my word for it :
> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”
https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...
> This is what happened with Eric Schmidt on Apple’s board
Yes, after 3 years. A tenure longer than the OAI board members in question, so frankly the point stands.
He was instrumental; he threatened resignation unless the old board could provide evidence of wrongdoing.
Corresponding Princess Bride scene: https://youtu.be/rMz7JBRbmNo?si=uqzafhKISmB7A-H7
Probably precisely what Condoleezza Rice was doing on Dropbox’s board. Or that board filled with national security state heavyweights at that “visionary” and her blood-testing thingie.
https://www.wired.com/2014/04/dropbox-rice-controversy/
https://en.wikipedia.org/wiki/Theranos#Management
In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m
“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)
https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...
Seriously, businesses simply don't have the history that governments do. They're just as capable of violence.
https://utopia.org/guide/crime-controversy-nestles-5-biggest...
All you're identifying is "governments have a longer history of violence than businesses".
- peer pressure
- group think
- financial motives
- fear of the unknown (Sam being a known quantity)
- etc.
So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.
If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.
But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.
It's a very influential essay.
I wouldn't be so sure. While I think the board handled this process terribly, I think the majority of mainstream media articles I saw were very cautionary regarding the outcome. Examples (and note the second article reports that Paul Graham fired Altman from YC, which I never knew before):
MarketWatch: https://www.marketwatch.com/story/the-openai-debacle-shows-s...
Washington Post: https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
https://twitter.com/coloradotravis/status/172606030573668790...
A good leader is someone you'll follow into battle, because you want to do right by the team, and you know the leader and the team will do right by you. Whatever 'leadership' is, Sam Altman has it and the board does not.
https://www.ft.com/content/05b80ba4-fcc3-4f39-a0c3-97b025418...
The board could have said, hey we don't like this direction and you are not keeping us in the loop, it's time for an orderly change. But they knew that wouldn't go well for them either. They chose to accuse Sam of malfeasance and be weaselly ratfuckers on some level themselves, even if they felt for still-inscrutable reasons that was their only/best choice and wouldn't go down the way it did.
Sam Altman is the front man who 'gave us' ChatGPT regardless of everything else Ilya and everyone else did. A personal brand (or corporate) is about trust, if you have a brand you are playing a long-term game, a reputation converts prisoner's dilemma into iterated prisoner's dilemma which has a different outcome.
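To make that last point concrete, here is a toy simulation (my own sketch, not from the thread): in a single round of the prisoner's dilemma defection dominates, but once the game repeats and players can react to your reputation, a cooperative strategy like tit-for-tat does far better than permanent defection.

    # Payoffs from my perspective: (my move, their move) -> my score
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def play(strategy_a, strategy_b, rounds=100):
        history_a, history_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)  # each player sees the opponent's past moves
            move_b = strategy_b(history_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    always_defect = lambda opp: "D"
    tit_for_tat = lambda opp: opp[-1] if opp else "C"

    print(play(always_defect, always_defect))  # mutual defection: (100, 100)
    print(play(tit_for_tat, tit_for_tat))      # sustained cooperation: (300, 300)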
Not-validated, unsigned letter [1]
>>All companies are monocultures
Yes and no. There has to be diversity of thought to ever get anything done; if everyone is just a sycophant agreeing with the boss, then you end up with very bad product choices, and even worse company direction.
Yes, there has to be some commonality, some semblance of shared vision or values, but I don't think that makes a "monoculture".
[1] https://wccftech.com/former-openai-employees-allege-deceit-a...
Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research, ad infinitum; this is the real singularity. This is actually the best defense against other actors. Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action instead of providing an informational reply, which is only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
From https://openai.com/our-structure
- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
- Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
- Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
- Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.
- Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]
> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.
[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
I don't think long-term unemployment among people with a disability or other long-term condition is "fantastically rare", sadly. This is not the frequency by length of unemployment, but:
https://www.statista.com/statistics/1219257/us-employment-ra...
> Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:
"We are not that dumb, and we are not that smart."
https://en.wikipedia.org/wiki/New_Coke
First, they had offers to walk to both Microsoft and Salesforce and be made good. They didn't have to stay and fight to have money and careers.
But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.
I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.
Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and there was potential jail time involved. And they justify smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???
There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.
Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."
https://www.irs.gov/charities-non-profits/charitable-organiz...
https://www.thecrimson.com/article/2023/5/5/epstein-summers-...
I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]
> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed
[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
Toner got her board seat because she was basically Holden Karnofsky's designated replacement:
> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.
> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".
There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)
1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...
> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.
https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...
> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.
https://variety.com/2023/biz/news/wga-ratify-contract-end-st...
https://news.ycombinator.com/item?id=38375239&p=2
No, because it is an effort in futility. We are evolving into extinction and there is nothing we can do about it. https://bower.sh/in-love-with-a-ghost
https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...
Seeing a bug in your comment here:
You reference the pages like this:
https://news.ycombinator.com/item?id=38375239?p=2
The second ? should be an & like this:
https://news.ycombinator.com/item?id=38375239&p=2
Please feel free to delete this message after you've received it.
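For illustration (my own sketch, not part of the original exchange), standard URL handling shows why only the first separator may be a "?":

    # In a query string the first separator is "?"; every additional key=value
    # pair is joined with "&". A second "?" just becomes part of the first value.
    from urllib.parse import urlencode, urlparse, parse_qs

    params = {"id": "38375239", "p": "2"}
    print("https://news.ycombinator.com/item?" + urlencode(params))
    # -> https://news.ycombinator.com/item?id=38375239&p=2

    broken = urlparse("https://news.ycombinator.com/item?id=38375239?p=2")
    print(parse_qs(broken.query))  # {'id': ['38375239?p=2']} - "p" is never seen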
> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.
"Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past" [1]
HN plans to be multi-core?!?! A bigger scoop than OpenAI governance!
Anything more you can share?
[1] >>38351005
Pretty much the same method was used to shut down Rauma-Repola submarines https://yle.fi/a/3-5149981
After? They get the godbox. I have no idea what happens to it after that. Model weights are stored on secure govt servers; installed backdoors are used to sweep the corporate systems clean of any lingering model weights. Etc.
Note that the response is Altman's, and he seems to support it.
As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he knows (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.
https://en.wikipedia.org/wiki/United_States_government_role_...
If you're trying to draw a parallel here, then safety and the federal government need to catch up. There are already commercial offerings that any random internet user can use.
And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.
So they allied with Helen to countercoup Greg/Sam.
I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.
https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation
There are other similar examples like Ikea.
But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.
Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:
Explicit planning with discrete knowledge is GOFAI, and I don't think it's workable.
There is whatever's going on here: https://x.com/natolambert/status/1727476436838265324?s=46
Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...