zlacker

[parent] [thread] 9 comments
1. smolde+(OP)[view] [source] 2023-11-20 00:29:23
GPT-4 is an order of magnitude larger but not an order of magnitude better. Even before that, GPT-3 was not a particularly high watermark (compared to T5 and BERT), and GPT-2 was famously so expensive to run that it ran up a 6-figure monthly cloud spend just for inference. Lord knows what GPT-4 costs at scale, but I'm not convinced it's cost-competitive with the alternatives.
replies(1): >>p1esk+Z1
2. p1esk+Z1[view] [source] 2023-11-20 00:40:54
>>smolde+(OP)
GPT-4 is an existential threat to Google. Since March 24 of this year, 80% of the questions I would previously have googled, I now ask GPT-4 instead. And Google knows this. They are throwing billions at it but simply cannot catch up.
replies(2): >>smolde+U4 >>ben_w+Za1
3. smolde+U4[view] [source] [discussion] 2023-11-20 01:00:54
>>p1esk+Z1
Beating OpenAI in a money-pissing competition is not their priority. I don't use Google or harbor much love for them, but the existence of AI does not detract from the value of advertising. If anything, it funnels more people into advertising, as AI companies look for a way to monetize something that is otherwise unprofitable. ChatGPT is not YouTube; it doesn't print money.

Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.

replies(1): >>p1esk+3f
4. p1esk+3f[view] [source] [discussion] 2023-11-20 02:14:30
>>smolde+U4
> Beating OpenAI in a money-pissing competition is not their priority

I bet Google has already spent an order of magnitude more money on GPT-4 rival development than OpenAI spent on GPT-4.

replies(1): >>smolde+ps
5. smolde+ps[view] [source] [discussion] 2023-11-20 04:12:43
>>p1esk+3f
For the sake of your wallet, I hope you don't put money on that. Google certainly spends an order of magnitude more than OpenAI, but that's because they have been around longer, ship their own hardware, and maintain their own inference library. What they spend on training their LLMs is a minority of that, full stop.

I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.

replies(1): >>p1esk+hx
6. p1esk+hx[view] [source] [discussion] 2023-11-20 05:00:11
>>smolde+ps
> Google's advantage here is so blatantly obvious

Exactly. Google has so many more resources, tries so hard to compete (it's literally life or death for them), and is still so far behind. It's strange that you don't see that. If you haven't compared Bard's output to GPT-4's on the same questions, try it; the gap becomes obvious.

Their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.

replies(1): >>smolde+9Z1
7. ben_w+Za1[view] [source] [discussion] 2023-11-20 08:47:37
>>p1esk+Z1
From a user's POV, GPT-4 with search might be, but not GPT-4 alone. There's still a need for live results and for citing specific documents. Search doesn't have to mean Google, but it can mean Google.

From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against spam SEO. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.

LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.

replies(1): >>p1esk+Kz4
8. smolde+9Z1[view] [source] [discussion] 2023-11-20 13:52:01
>>p1esk+hx
If you see "beating GPT-4" as an actual goalpost, then sure. Google doesn't, and their output reflects that.
9. p1esk+Kz4[view] [source] [discussion] 2023-11-21 02:08:19
>>ben_w+Za1
> LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio

Soon (1-2 years) LLMs will be good enough to improve the general SNR of the web. In fact, I think GPT-4 might already be.

replies(1): >>ben_w+hH9
10. ben_w+hH9[view] [source] [discussion] 2023-11-22 11:06:13
>>p1esk+Kz4
I think they could only improve the SNR if they knew how to separate fact from fiction. While I would love to believe they can do that in 1-2 years, I don't see any happy path to it.