Extra points if Google were to swoop in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock out Microsoft and Bing from ChatGPT (or, if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, while the OpenAI board is down to 4 people, its current leader has prior Google ties, and its interest is in treating AI as an academic curiosity, which a fat war chest would enable. Plus, if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.
As for Microsoft, I don't think they need it:
Even assuming they have the whole 90B USD to spend, it doesn't really make sense;
they already have full access to OpenAI's source code and datasets (because the whole training and runtime stack already runs on their servers).
They could poach employees by making them better offers, get away with a much more efficient cost basis, and increase employee retention (whereas OpenAI employees may become so rich after a buyout that they'd be tempted to leave).
They could without any doubt replicate the tech internally, without OpenAI.
Google is in deep trouble for now; perhaps they will recover with Gemini. In theory they could buy OpenAI, but it seems out of character for them. There are strong internal political conflicts within Google, and technically it would be a nightmare to merge OpenAI's infrastructure and code into their /google3 codebase and its soup of Google-only dependencies.
Would they also be able to keep up with development?
Probably. If the people running it and the shareholders were committed to keeping up and spending money to do so.
Sure, Microsoft has physical access to the source code and model weights because everything is trained on their servers. That doesn't mean they can just take it. If you've ever worked at a big cloud provider or on an enterprise software system, you'll know that there's a big legal firewall around customer data stored within the company's systems: you can't look at it or touch it without the customer's consent, and even then only for specific business purposes.
Same goes for the board. Legally, the non-profit board is in charge of the for-profit OpenAI entity, and Microsoft does not get a vote. If they want the board gone but the board does not want to step down, too bad. They have the option of poaching all the talent and trying to re-create the models - but they have to do this employee-by-employee, they can't take any confidential OpenAI data or code, etc. Microsoft may have OpenAI by the balls economically, but OpenAI has Microsoft by the balls legally.
A buyout solves both of these problems. It's an exchange of economic value (which Microsoft has in spades) for legal control (which the OpenAI board currently has). Straightens out all the misaligned incentives and lets both parties get what they really want, which is the point of transactions in the first place.
No, they don't. Both Bard and Llama are far behind GPT-4, and GPT-4 finished training in August 2022.
I also personally loathe Microsoft, but even I will concede that they probably have the technical wherewithal to follow known trajectories; the cat is out of the bag with AI now.
https://chat.openai.com/share/3dd98da4-13a5-4485-a916-60482a...
Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.
I bet Google has already spent an order of magnitude more money on GPT-4 rival development than OpenAI spent on GPT-4.
I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.
Exactly. Google has so many more resources and tries so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that. If you haven't tried comparing Bard's output to GPT-4 on the same questions, try it; it will become obvious.
It's quite possible their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.
From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against spam SEO. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.
LLMs might already be good enough to degrade the benefit of freedom of speech by wrecking the signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad propaganda potential is one of the things the red team was working on before the initial release.
Soon (1-2 years) LLMs will be good enough to improve the general SNR of the web. In fact I think GPT-4 might already be.