Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
A good example of how just having your foot in the door creates serendipitous opportunity in life.
There are also all the questions collected for RLHF, and the pipelines built around that.
To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.
Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.
Sounds like Altman's biography.
Their development and QA process is either disorganized to the extreme, or non-existent.
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
I'm wondering why that option hasn't been used yet.
If they let msft "loot" all their IP, they lose whatever leverage they might still have; and if they acted for ideological reasons, I can see why they might prefer a scorched-earth policy.
Given that they refused to resign, it seems they prefer to fight rather than hand it to Sam Altman, which is what the msft maneuver amounts to de facto.
What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.
If you’re making like 250k cash and were promised $1M a year in now-worthless paper, plus you have OpenAI on the resume and are one of the most in-demand people in the world? It would be ridiculously easy to quit.
Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.
Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.
OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D’Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D’Angelo’s Poe project.
Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.
Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.
Could he have squandered his good fortune - absolutely, but his life is not exactly per ardua ad astra.
Experience leads to pattern recognition, and this is the tech community equivalent of a David Attenborough production (with my profuse apologies to Sir Attenborough). Something about failing to learn history and repeating it should go here too.
If you can take away anything from observing this event unfold, learn from it. Consider how the sophisticated vs the unsophisticated act, how participants respond, and what success looks like. Also, slow is smooth, smooth is fast. Do not rush when the consequences of a misstep are substantial. You learning from this is cheaper than the cost for everyone involved. It is a natural experiment you get to observe for free.
Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.
MS can only win, because there are only two viable options: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.
Neither outcome benefits competitors.
[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.
Personally I've got enough IOUs alive that I may be rich one day. But if someone gave me retire-in-4-years money, guaranteed, I wouldn't even blink before taking it.
*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.
It seems obvious Microsoft has a license to use them in Microsoft's own products. Microsoft said so directly on Friday.
What is less obvious is if Microsoft has a license to use them in other ways. For example, can Microsoft provide those weights and code to third parties? Can they let others use them? In particular, can they clone the OpenAI API? I can see reasons for why that would not have been in the deal (it would risk a major revenue source for OpenAI) but also reasons why Microsoft might have insisted on it (because of situations just like the one happening now).
What is actually in the deal is not public as far as I know, so we can only speculate.
Which I don't think is impossible at some level (probably at less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have, if they keep some key staff -- there are other interested deep-pocketed parties that could use the leg up -- but it's not going to be a cakewalk in the best of cases.
My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )
Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.
In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.
The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.
Roko's Basilisk is a sci-fi hypothetical.
Altman's Basilisk, if that's what happened, is a panic reaction.
I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.
It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.
If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)
If tomorrow it's Donald Trump or Sam Altman or anyone else, and it works out, the investors are going to be happy.
If what he regrets is realizing too late the divergence between the direction Sam was taking the firm and the safety orientation nominally central to the mission of the OpenAI nonprofit (and one of Ilya's public core concerns), and then taking action aimed at stopping it that instead exacerbated the problem by putting Microsoft in a position to poach key staff and drive full force in the same direction OpenAI Global LLC had been going under Sam, but without any control from the OpenAI board, well, that's not a regret that makes him more attractive to Microsoft, either based on his likely intentions or his judgement.
And any regret more aligned with Microsoft's interests as far as intentions is probably even a stronger negative signal on judgement.
So I mean proper AGI.
Naming the product Clippy now is perfectly fine while it’s just an LLM, and it will be all the more excellent over the years when it eventually achieves AGI-ness.
At least in this forum, can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know)? So I think it’s funny you assume I think ChatGPT is an AGI.
They bought their IP rights from OpenAI.
I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.
I've actually had a discussion with Microsoft on this subject as they were offering us an EA with a certain license subscription at $X.00 for Y,000 calls per month. When we asked if they couldn't just make the Azure resource that does the exact same thing match that price point in consumption rates in our tenant they said unfortunately no. I just chalked this up to MSFT sales tactics, but I was told candidly by some others that worked on that Azure resource that they were getting 0 enterprise adoption of it because Microsoft couldn't adjust (specific?) consumption rates to match what they could offer on EA licensing.
Now up to 600+/770 total.
Couple janitors. I dunno who hasn't signed that at this point ha...
Would be fun to see a counter letter explaining their thinking to not sign on.
Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable, and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?
"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."
LLMs and GenAI are clever parlor tricks compared to the necessary science needed for AGI to actually arrive.
The details here certainly matter. I think a lot of people are assuming that Microsoft will just rain cash on anyone automatically sight unseen because they were hired by OpenAI. That may indeed be the case but it remains to be seen.
Also, all these cats aren't petty. They are friends. I'm sure Ilya feels terrible. Satya is a pro... There won't be hard feelings.
The guy threw in with the board... He's not from startup land. His last gig was Google. He's in way over his head relative to someone like Altman, who has been in this world since the moment he was out of college diapers.
Poor Ilya... It's awful to build something and then accidentally destroy it. Hopefully it works out for him. I'm fairly certain he and Altman and Brockman have already reconciled during the board negotiations... Obviously Ilya realized in the span of 48hrs that he'd made a huge mistake.
Could the $13B actually end up costing considerably less?
I don't see a trajectory to "head of Microsoft Research".
MSFT looks classy af.
Satya is no saint... But the evidence suggests to me he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.
They picked msft because of the value system the leadership exhibited and their willingness to work with OpenAI's unusual must-haves surrounding governance.
The big players at OpenAI have made all that clear in interviews. Also, Altman has huge respect for Satya and team. He has more or less stated on podcasts that Satya is the best CEO he's ever interacted with. That says a lot.
was
There are lots of people doing excellent research on the market right now, especially with the epic brain drain being experienced by Google. And remember that OpenAI neither invented transformers nor switch transformers (which is what GPT4 is rumoured to be).
Finally the paperclip maximizer
Good luck trying to find H100 80GB cards on the 3 big clouds.
Which is a phenomenal deal for MSFT.
Time will tell whether they ever reach more than $13B in profits.
So no, we’re nowhere near max capability.
If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.
Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.
The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.
Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.
Why would anyone in their right mind invite such a man to lead a commercial research team, when he's demonstrated quite clearly that he'd spend all his time trying to sabotage it?
This idea that he's one of the world's best researchers is also somewhat questionable. Nobody cared much about OpenAI's work up until they did some excellent scaling engineering, partnered with Microsoft to get GPUs and then commercialized Google's transformer research papers. OpenAI's success is still largely built on the back of excellent execution of other people's ideas more than any unique breakthroughs. The main advance they made beyond Google's work was InstructGPT which let you talk to LLMs naturally for the first time, but Sutskever's name doesn't appear on that paper.
(I'm in the latter camp).
No enterprise employee gets fired for using Microsoft.
It is a power play to pull enterprises away from AWS and to suffocate GCP.
GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward toward what comes next after GPT. GPT3/4 itself is not a multibillion-dollar business.
I wouldn’t count on that if Microsoft’s legal team does a review of the training data.
It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.
Their asset isn't some kind of masterful operations management or reined-in cost and management structure, as far as I can see. It's the fact that they, simply put, have the leading models.
So I'm very confused: why would people want to follow the CEO and not be more attached to the technical leadership? Even from an investor's point of view?
With crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B, though, let alone $13B.
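A rough back-of-the-envelope for that claim (the $200M-per-$1B figure is the comment's own guess, and the linear extrapolation is my assumption, not data):

    # Implied efficiency of converting cloud credits into mined crypto,
    # using the figures from the comment above (guesses, not data).
    credits_usd = 1_000_000_000        # $1B in Azure credits
    crypto_out_usd = 200_000_000       # ~$200M of crypto out

    efficiency = crypto_out_usd / credits_usd
    print(f"Implied conversion efficiency: {efficiency:.0%}")  # -> 20%

    # Naive linear extrapolation to $13B -- the comment argues mining
    # markets would crater long before this held.
    print(f"Linear estimate at $13B: ${13e9 * efficiency / 1e9:.1f}B")  # -> $2.6B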
Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?
I've definitely come out worse on some of the screw ups in my life.
The problem right now with GPT4 is that it's not citing its sources (for non search based stuff), which is immoral and maybe even a valid reason to sue over.
The hubris, indeed.
Maybe they can come up with a personification for the YouTube algorithm. Except he seems like a bit of a bad influence.
Either way, I think GGP’s comment was not applicable based on my comment as written and certainly my intent.
If they want to go full Bell Labs/DeepMind style, they might not need the majority of those 700.
That team has set the state of the art for years now.
Every major firm that has a spot for that company's chief researcher and can afford him would bid.
This is the team that actually shipped and continues to ship. You take him every time if you possibly have room and he'd be happy there.
Anyone who's hiring would agree in 99 percent of cases, limited scenarios such as a bad predicted team fit etc. set aside.
https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...
Even they prob had some friend come flying over and jump out of some autonomous car to knock on their door in sf.
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
Does it do logical reasoning or inference before presenting text to the user?
That's a lot of waste heat.
(Edit) Or is next-word prediction just it?
"LLMs cannot find reasoning errors, but can correct them" >>38353285
"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
I grew up poor in the 90s and had my own computer around ~10yrs old. It was DOS but I still learned a lot. Eventually my brother and I saved up from working at a diner washing dishes and we built our own Windows PC.
I didn't go to college but I taught myself programming during a summer after high school and found a job within a year (I already knew HTML/CSS from high school).
There are always ways. But I do agree partially: YC/VCs do have a bias towards kids from high-end schools and connected families.
Including their head researcher.
I'm not continuing this. Your position is about as tenable as the board's. Equally rigid as well.
I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch, one which had access to a copy of all of Microsoft's internal data as a dataset fed into it, was now completely aware of how they operate, and used that to wipe the floor with them, just in time to take the US election in 2024?
Wouldn't that be a nicer reality?
I mean, unless you were rooting for the malevolent one...
But yeah, coming back down to reality, likelihood is that MS just bought a really valuable asset for almost free?
Ah, OpenAI is closed source stuff. Non-profit, but "we will sell the company" later. Just let us collect data, analyse it first, build a product.
War is peace, freedom is slavery.
at first that meant the opposite of monopolization: flood the world with limited AIs (GPT 1/2) so that society has time to adapt (and so that no one entity develops asymmetric capabilities they can wield against other humans). with GPT-3 the implementation of that mission began shifting toward worry about AI itself, or about how unrestricted access to it would allow smaller bad actors (terrorists, or even just some teenager going through a depressive episode) to be an existential threat to humanity. if that's your view, then open models are incompatible.
whether you buy that view or not, it kinda seems like the people in that camp just got outmaneuvered. as a passionate idealist in other areas of tech, the way this is happening is not good. OpenAI had a mission statement. M$ maneuvered to co-opt that mission, the CEO may or may not have understood as much while steering the company, and now a mass of employees is wanting to leave when the board steps in to re-align the company with its stated mission. whether or not you agree with the mission: how can i ever join an organization with a for-the-public-good type of mission i do agree with, without worrying that it will be co-opted by the familiar power structures?
the closest (still distant) parallel i can find: Raspberry Pi Foundation took funding from ARM: is the clock ticking to when RPi loses its mission in a similar manner? or does something else prevent that (maybe it's possible to have a mission-driven tech organization so long as the space is uncompetitive?)
https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...
somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.
Yes, though end result would probably be more like IE - barely good enough, forcefully pushed into everything and everywhere and squashing better competitors like IE squashed Netscape.
When OpenAI went in with MSFT, it was as if they had ignored the 40 years of history of what MSFT has done to smaller technology partners. What happened to OpenAI pretty much fits that pattern of a smaller company that developed great tech and was raided by MSFT for it (the specific actions of specific persons aren't really important - the main factor is MSFT's black-hole-like gravitational force, and it was just a matter of time before its destructive power manifested itself, as in this case where it simply tore OpenAI apart with tidal forces).
Quora data likely made a huge difference in the quality of those GPT responses.
My point is that I did not have the luxury of dropping out of school to try my hand at the tech startup thing. If I came home and told my Dad I abandoned school - for anything - he would have thrown me out the 3rd-floor window.
People like Altman could take risks, fail, try again, until they walked into something that worked. This is a common thread almost among all of the tech personalities - Gates, Jobs, Zuckerberg, Musk. None of them ever risked living in a cardboard box in case their bets did not pay off.
I presume their deal is something different from the typical Azure experience and more direct / close to the metal.
But they probably allowed this to get derailed far too long ago to do anything about it now.
Sounds like their only options are:
a) Structure in a way Microsoft likes and give them the tech
b) Give Microsoft the tech in a different way
c) Disband the company, throw away the tech, and let Microsoft hire everybody who created the tech so they can recreate it.
Especially since you have to explain how "just mimicking" works so well.
Sounds a bit low for these people, unless I am misunderstanding.
75% of the profits of a company controlled by a non-profit whose goals are different from yours. By the way, for a normal company this cap would be ∞.
Altman reminds me of Sam Bankman-Fried, except for the dropping out.
The current position of others may have much more to do with power than their personal judgments. Altman, Microsoft, and their friends and partners wield a lot of power over their future careers.
> Incredible, really. The hubris.
I read that as mocking them for daring to challenge that power structure, and on a possibly critical societal issue.
If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:
This is AAA talent. They can always land elsewhere.
I doubt there would even be hard feelings. The team seems super tight. Some folks aren't in a position to put themselves out there. That sort of thing would be totally understandable.
This is not a petty team. You should look more closely at their culture.
I imagine us actually reaching AGI, and people will start saying, "Yes, but it is not real AGI because..." This should be a measure of capabilities not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not continue moving the goalposts.
Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...
Microsoft's policies really suck. Mandatory updates and reboots, mandatory telemetry. Mandatory crapware like Edge and celebrity news everywhere.
2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.
That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.
Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.
It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.
https://twitter.com/karaswisher/status/1726782065272553835
Caveat: I don't know anything.
https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ...
Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known, he's become a doomer who wants to stop AI research - the exact opposite of the sort of person you want around in a fast moving startup. The benefits? Well.... unless he's doing a ton of mentoring or other behind the scenes soft work, it's hard to see what they'd lose.
Why acquire rights to thousands of different favourite characters when you can build the bot underneath, and let the media houses that own them negotiate licenses to skin and personalise said bot?
Same as GPS voices I guess.
As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.
Of course it's not possible yet, but talking & thinking about it is how we make it possible. Things don't just create themselves (well, maybe once we _do_ have AGI-level AI, heh, that'll be a fun apocalypse).
It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.
Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.
Why does he need to do that? He doesn't need to make any such public statement!
Commercialisation is a good way to achieve stability & drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards". Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI will be fully open source and will just drop the GPT3/4 models on the Internet, then I just think they're so, so wrong, as long as OAI keeps going on their high and mighty "AI safety" spiel.
As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.
Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.
But then we'd never give such an AGI the power to do what it needs to do. Just imagining an all-powerful machine telling the 1% that they'll actually have to pay taxes so that every single human can be allocated a house/food/water/etc for free.
I've had Cortana shut off for so long it took me a minute to remember they've used the name already.
Yes, know thyself. I've turned down offers that seemed lucrative or just cooperative, and otherwise without risk - boards, etc. They would have been fine if everything went smoothly, but people naturally don't anticipate over-the-horizon risk and if any stuff hit a fan I would not have been able to fulfill my responsibilities, and others would get materially hurt - the most awful, painful, humiliating trap to be in. Only need one experience to learn that lesson.
> People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives.
I don't think you need to grow up that way. Look at the uber-powerful who have been in that position for a few years.
Honestly, I'm not sure I buy the idea that it's a prevalent case, the people who grow up that way. People generally leave the nest and learn. Most of the world's higher-level leaders (let's say, successful CEOs and up) grew up in stability and relative wealth. Of course, that doesn't mean their parents didn't teach them about consequences, but how could we really know that about someone?