zlacker

[parent] [thread] 22 comments
1. nostra+(OP)[view] [source] 2023-11-19 23:25:44
The most logical outcome would be for Microsoft to buy the for-profit OpenAI entity off its non-profit parent for $50B or some other exorbitant sum. They have the money, this would give the non-profit researchers enough play money that they can keep chasing AGI indefinitely, all the employees who joined the for-profit entity chasing a big exit could see their payday, and the new corporate parent could do what they want with the tech, including deeply integrate it within their systems without fear of competing usages.

Extra points if Google were to sweep in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock out Microsoft and Bing from ChatGPT (or, if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, while the OpenAI board is down to 4 people, its current leader has prior Google ties, and their interest is in pursuing AI as an academic curiosity, which a fat war chest would fund. Plus, if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.

replies(1): >>rvnx+i
2. rvnx+i[view] [source] 2023-11-19 23:27:46
>>nostra+(OP)
For reference, the new investors entered at a ~90B USD valuation.

I don't think Microsoft needs it.

Even assuming they have the whole 90B USD to spend, it doesn't really make sense:

They already have full access to OpenAI's source code and datasets, because the whole training and runtime stack runs on their servers.

They could poach employees with better offers, get away with a much more efficient cost basis, and increase employee retention (whereas OpenAI employees might become so rich after a buy-out that they'd be tempted to leave).

They can replicate the tech internally without any doubt and without OpenAI.

Google is in deep trouble for now; perhaps they will recover with Gemini. In theory they could buy OpenAI, but it seems out of character for them. There are strong internal political conflicts within Google, and technically it would be a nightmare to merge OpenAI's infrastructure and code into their /google3 codebase and its soup of Google-only dependencies.

replies(2): >>timeon+T >>nostra+x3
3. timeon+T[view] [source] [discussion] 2023-11-19 23:32:24
>>rvnx+i
> They can replicate the tech internally without any doubt and without OpenAI.

Would they also be able to keep up with development?

replies(1): >>PostOn+92
4. PostOn+92[view] [source] [discussion] 2023-11-19 23:39:54
>>timeon+T
Would a 2.75 trillion dollar software company that has been around since the inception of the modern computer be able to keep up?

Probably. If the people running it and the shareholders were committed to keeping up and spending money to do so.

replies(2): >>ben_w+K4 >>p1esk+Ca
5. nostra+x3[view] [source] [discussion] 2023-11-19 23:47:59
>>rvnx+i
The reason for a buy-out is to make this all legally "clean".

Sure, Microsoft has physical access to the source code and model weights because it's trained on their servers. That doesn't mean they can just take it. If you've ever worked at a big cloud provider or on an enterprise software system, you'll know there's a big legal firewall around customer data stored within the company's systems: you can't look at it or touch it without the customer's consent, and even then only for specific business purposes.

Same goes for the board. Legally, the non-profit board is in charge of the for-profit OpenAI entity, and Microsoft does not get a vote. If they want the board gone but the board does not want to step down, too bad. They have the option of poaching all the talent and trying to re-create the models - but they have to do this employee-by-employee, they can't take any confidential OpenAI data or code, etc. Microsoft may have OpenAI by the balls economically, but OpenAI has Microsoft by the balls legally.

A buyout solves both of these problems. It's an exchange of economic value (which Microsoft has in spades) for legal control (which the OpenAI board currently has). Straightens out all the misaligned incentives and lets both parties get what they really want, which is the point of transactions in the first place.

6. ben_w+K4[view] [source] [discussion] 2023-11-19 23:55:30
>>PostOn+92
That wasn’t sufficient for Bing, Cortana, or Windows Mobile/Windows Phone.
replies(2): >>smolde+36 >>PostOn+tb
7. smolde+36[view] [source] [discussion] 2023-11-20 00:03:17
>>ben_w+K4
It wasn't sufficient for Google+ or Farmville either, but both Google and Meta have extremely competitive LLMs. If Microsoft commit themselves (which is a big if), they could have a competitive AI research lab. They're a cloud company now though, so it makes sense that they'd align themselves with the most service-oriented business of the lot.
replies(1): >>p1esk+6a
8. p1esk+6a[view] [source] [discussion] 2023-11-20 00:26:31
>>smolde+36
> both Google and Meta have extremely competitive LLMs

No they don’t. Both Bard and Llama are far behind GPT-4, and GPT-4 finished training in August 2022.

replies(2): >>smolde+Ja >>fragme+vd
9. p1esk+Ca[view] [source] [discussion] 2023-11-20 00:28:55
>>PostOn+92
The larger the corporation the harder for it to keep up or innovate.
10. smolde+Ja[view] [source] [discussion] 2023-11-20 00:29:23
>>p1esk+6a
GPT-4 is an order of magnitude larger and not an order of magnitude better. Even before that, GPT-3 was not a particularly high watermark (compared to T5 and BERT), and GPT-2 was famously so expensive to run that it racked up a 6-figure monthly cloud spend just for inference. Lord knows what GPT-4 costs at scale, but I'm not convinced it's cost-competitive with the alternatives.
replies(1): >>p1esk+Ic
11. PostOn+tb[view] [source] [discussion] 2023-11-20 00:33:52
>>ben_w+K4
I personally believe these are marketing failures rather than technical failures.

I also personally loathe Microsoft, but even I will concede that they probably have the technical wherewithal to follow known trajectories; the cat is out of the bag with AI now.

12. p1esk+Ic[view] [source] [discussion] 2023-11-20 00:40:54
>>smolde+Ja
GPT-4 is an existential threat to Google. Since March 24 of this year, 80% of the time I've been asking GPT-4 the questions I would previously have googled. And Google knows this. They are throwing billions at it but simply cannot catch up.
replies(2): >>smolde+Df >>ben_w+Il1
13. fragme+vd[view] [source] [discussion] 2023-11-20 00:46:02
>>p1esk+6a
Why does ChatGPT-4 say its knowledge cutoff date is April 2023?

https://chat.openai.com/share/3dd98da4-13a5-4485-a916-60482a...

replies(1): >>p1esk+Vo
14. smolde+Df[view] [source] [discussion] 2023-11-20 01:00:54
>>p1esk+Ic
Beating OpenAI in a money-pissing competition is not their priority. I don't use Google or harbor much love for them, but the existence of AI does not detract from the value of advertising. If anything, it funnels more people into it as they're looking to monetize that which is unprofitable. ChatGPT is not YouTube; it doesn't print money.

Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.

replies(1): >>p1esk+Mp
15. p1esk+Vo[view] [source] [discussion] 2023-11-20 02:07:58
>>fragme+vd
There are many versions of the GPT-4 model that appeared after the first one. My point is that Google and the others still cannot match the quality of the first one, more than a year after it was trained.
replies(1): >>smolde+zC
16. p1esk+Mp[view] [source] [discussion] 2023-11-20 02:14:30
>>smolde+Df
> Beating OpenAI in a money-pissing competition is not their priority

I bet Google has already spent an order of magnitude more money on GPT-4 rival development than OpenAI spent on GPT-4.

replies(1): >>smolde+8D
17. smolde+zC[view] [source] [discussion] 2023-11-20 04:06:14
>>p1esk+Vo
According to the PaLM 2 technical report (page 14), their model beats GPT-4 on several reasoning benchmarks: https://ai.google/static/documents/palm2techreport.pdf
18. smolde+8D[view] [source] [discussion] 2023-11-20 04:12:43
>>p1esk+Mp
For the sake of your wallet, I hope you don't put money on that. Google certainly spends an order of magnitude more than OpenAI overall, because they have been around longer, ship their own hardware, and maintain their own inference library. The amount they spend on training their LLMs is a minority of that, full stop.

I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.

replies(1): >>p1esk+0I
19. p1esk+0I[view] [source] [discussion] 2023-11-20 05:00:11
>>smolde+8D
> Google's advantage here is so blatantly obvious

Exactly. Google has so many more resources, tries so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that. If you haven't tried comparing Bard's output to GPT-4 on the same questions, try it; it will become obvious.

It's quite possible their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.

replies(1): >>smolde+S92
20. ben_w+Il1[view] [source] [discussion] 2023-11-20 08:47:37
>>p1esk+Ic
From a user's POV, GPT-4 with search might be, but not alone. There's still a need for live results and for citing specific documents. Search doesn't have to mean Google, but it can mean Google.

From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against spam SEO. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.

LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.

replies(1): >>p1esk+tK4
21. smolde+S92[view] [source] [discussion] 2023-11-20 13:52:01
>>p1esk+0I
If you see "beating GPT-4" as an actual goalpost, then sure. Google doesn't; their output reflects that.
22. p1esk+tK4[view] [source] [discussion] 2023-11-21 02:08:19
>>ben_w+Il1
> LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio

Soon (1-2 years) LLMs will be good enough to improve the general SNR of the web. In fact, I think GPT-4 might already be good enough.

replies(1): >>ben_w+0S9
23. ben_w+0S9[view] [source] [discussion] 2023-11-22 11:06:13
>>p1esk+tK4
I think they'd only be able to improve the SNR if they know how to separate fact from fiction. While I would love to believe they can do that in 1-2 years, I don't see any happy path for that.