[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...
https://www.youtube.com/watch?v=Gpc5_3B5xdk
The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?
Also keep in mind that Microsoft hasn't actually given OpenAI $13 billion, because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: they get a $90 billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
what the actual fuck =O
> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.
So, 3/4 of the current board (excluding Ilya) held on despite this letter?
[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...
https://www.levels.fyi/blog/openai-compensation.html
https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...
> Never attribute to malice that which is adequately explained by stupidity.
While Activision makes much more money, I imagine, acquiring a whole division of productive, _loyal_ staffers who work well together on something as important as AI is cheap at $13B.
Some background: https://sl.bing.net/dEMu3xBWZDE
You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.
frik on April 25, 2014:
> Nokia's fate will be remembered as a hostile takeover. Everything worked out in Microsoft's favor in the end, though Windows Phone/Tablet have low market share, a lot lower than expected.
> * Stephen Elop the former Microsoft employee (head of the Business Division) and later Nokia CEO with his infamous "Burning Platform" memo: http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia
> * Some former Nokia employees called it "Elop = hostile takeover of a company for a minimum price through CEO infiltration": https://gizmodo.com/how-nokia-employees-are-reacting-to-the-...
For the record: I don't actually believe that there is an evil Microsoft master plan. I just find it sad that Microsoft takes over cool stuff and inevitably turns it into Microsoft™ stuff or abandons it.
1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...
The brave board of "totally independent" NGO patriots (one of whom is described by insiders as wielding influence comparable to that of a USAF colonel [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the chief greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. In his place they were going to install their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, as you can see by evaluating side-channel discourse from adjacent "believers"; see [2].
Roughly speaking, and based on my experience with this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the telltale markers of operative work: we see security officers, and we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of counterintelligence sections overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word for such elements: "terrorist". I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and reshuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!
I believe that US counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset of fearing anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.
[1]: >>38330819
[2]: https://nitter.net/jeremyphoward/status/1725712220955586899
Maybe someone thinks Sam was "not consistently candid" about the fact that one of the feature bullets in the latest release effectively dropped d'Angelo's Poe directly into the ChatGPT app for no additional charge.
Given the Dev Day timing and the update releasing these "GPTs", this is an entirely plausible timeline.
https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
https://nonprofitquarterly.org/newmans-philanthropic-excepti...
> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:
> * The business must be owned by the private foundation through 100 percent ownership of the voting stock.
> * The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
> * All profits of the business must be distributed to the foundation.

They are rumored to compete with each other to the point that they can actually have a negative impact.
https://twitter.com/karaswisher/status/1726599700961521762?s...
The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.
But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.
[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...
Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to make that happen [1].
If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.
[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or jump straight to a page like this:
https://news.ycombinator.com/item?id=38347868&p=2
https://news.ycombinator.com/item?id=38347868&p=3
https://news.ycombinator.com/item?id=38347868&p=4
etc...
EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...
https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm
Substitute with appropriate ex-Soviet doomer music as necessary
https://en.wikipedia.org/wiki/501(c)_organization
"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"
However, many other forms of organizations can be non-profit, with no implied morality whatsoever.
Your local frat or country club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a fraternal beneficiary society paying member benefits [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].
Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes. https://stratechery.com/2023/openais-misalignment-and-micros...
https://docs.google.com/document/d/1SWnabqe1PviVE3K7KIZsN4IA...
> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.
Dustin Moskovitz isn't on the board but gave OpenAI the $30M in funding via his non-profit Open Philanthropy [0].
Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.
Helen Toner was probably brought in due to her past work at Open Philanthropy, the Dustin Moskovitz-funded non-profit funding OpenAI-type initiatives, and was also close to Sam Altman. Open Philanthropy also gave OpenAI the initial $30M [0].
Essentially, this is a donor-versus-investor battle. The donors aren't gonna make money off the commercial endeavors OpenAI began in 2019.
It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.
[0] - https://www.openphilanthropy.org/grants/openai-general-suppo...
Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT into the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)
That said, my (mildly educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with that when viewed through a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing; turning it into something more meaningful over the long term is a whole other beast.
These sort of Silicon Valley top dogs don't think in terms of sustainability.
Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (I'm roughly their age) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) ended up in such roles.
[1] >>38312294
> Almost 700 of 770 OpenAI employees, including Sutskever, have signed a letter demanding Sam and Greg back and a reconstituted board with Sam allies on it.
https://www.openphilanthropy.org/grants/openai-general-suppo...
[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
It takes time if you're a normal employee under standard operating procedure. If you really want to, you can merge two of the largest financial institutions in the world in less than a week. https://en.wikipedia.org/wiki/Acquisition_of_Credit_Suisse_b...
That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."
https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...
https://en.wikipedia.org/wiki/German_nuclear_weapons_program
https://txtify.it/https://www.nytimes.com/2023/11/18/technol...
NYT article about how AI safety concerns played into this debacle.
The world's leading AI company now has an interim CEO, Emmett Shear, who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, which is spending $50 billion a year on datacenters.
Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:
https://twitter.com/thiagovscoelho/status/172650681847663424...
And note Shear has tweeted that the Sam firing wasn't safety related. These might be weasel words, since all players involved know the legal consequences of admitting to any safety concerns publicly.
Nevertheless I agree with you and think (2) is wise to always keep in mind. I love Hanlon's Razor, but people definitely shouldn't take it literally, as written and/or as law.
https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...
Here's a tweet transcribing interim OpenAI CEO Emmett Shear's views on AI safety, or see the YouTube video for the original source. Some excerpts:
Preamble on his general pro-tech stance:
"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Everything technology can be misused, but you should usually wait. Eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulation, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."
On why AI would be dangerous to humanity:
"If you build something that is a lot smarter than us—not like somewhat smarter, but much smarter than we are as we are than dogs, for example, like a big jump—that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve my goal."
On his path to safe AI:
"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."
On why skeptics need to stop side-stepping the debate:
"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.
"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
Does it do logical reasoning or inference before presenting text to the user?
That's a lot of waste heat.
(Edit) Or is next-word prediction just it?
"LLMs cannot find reasoning errors, but can correct them" >>38353285
"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...
https://www.washingtonpost.com/technology/2023/11/20/microso...
Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...
2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.
That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.
Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.
It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.
https://twitter.com/karaswisher/status/1726782065272553835
Caveat: I don't know anything.
https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ...
Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known, he's become a doomer who wants to stop AI research - the exact opposite of the sort of person you want around in a fast moving startup. The benefits? Well.... unless he's doing a ton of mentoring or other behind the scenes soft work, it's hard to see what they'd lose.