But that aside, how did so many clueless folks, who understand neither the technology nor the legalese, nor have enough intelligence/acumen to foresee the immediate impact of their actions, come to be on the board of one of the most important tech companies?
I will bite. How do you know they didn't?
Why didn’t they hire a competent builder?
You:
>how do you know they weren't? It could be pure happenstance! All the nails could… could have been defective! Or something! *waves hands*
IME, most folks at Anthropic, OpenAI, or wherever who are freaking out about these things never defined the problem well, and typically were engaging with highly theoretical models rather than the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past, given that these were typically the folks I knew working there.
Sam may have added a lot of groundedness, but I don't know, of course, because I wasn't there.
I would definitely say the board screwed up.
https://www.forbes.com/sites/alexkonrad/2023/11/17/openai-in...
One scenario where both parties are fallible humans and their hands are forced: increased interest forces OpenAI to close down Plus signups because compute can't scale. Sam goes to Brockman, and they decide to use compute meant for GPT-5 to try to scale for new users, without informing the board. Breaking that rule may be perfectly fine with GPT-4, but what if Sam does this again in the future when they have AGI on their hands?
>Predicting specific outcomes in a situation like the potential firing of a high-profile executive such as Sam Altman from OpenAI is quite complex and involves numerous variables. As an AI, I can't predict future events, but I can outline some possible considerations and general advice:
The board has no responsibility to Microsoft whatsoever regarding this. Sam Altman structured it this way himself. Not to say that the board didn't screw up.
>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
From Forbes [1]:
Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of the British-founded international Centre for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.
[1] https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...
Between Bing, O365, etc., it's possible they could recoup all of the value of their investment and more. At the very least, it significantly minimizes the downside.
This isn't a university department. You fuck around with $100B+ of other people's money, you're gonna be in for it.
I'd imagine the latter, and that it can be easily yanked away.
Besides, even if you had an outstanding contract for $10bn, a judge would not pull a "well technically, you did say <X> even though that's absurd, so they get all the money and you get nothing."
Isn't this true for most of S.V.?
If 99% of their employees quit and their investors pull their support because Sam was fired, they're not getting anywhere and have failed to deliver on their charter.
And a bunch more. A lot of you will never have heard of them, but all of them are multi-billion-dollar behemoths with thousands of subsidiaries and employees, plus significant research and investment arms. And they love the fact that barely anyone knows them outside Germany.
Nothing stopping a non-profit from owning all the shares in a for-profit.