zlacker

[parent] [thread] 14 comments
1. smolde+(OP)[view] [source] 2023-11-20 00:03:17
It wasn't sufficient for Google+ or Farmville either, but both Google and Meta have extremely competitive LLMs. If Microsoft commits to this (which is a big if), they could have a competitive AI research lab. They're a cloud company now, though, so it makes sense that they'd align themselves with the most service-oriented business of the lot.
replies(1): >>p1esk+34
2. p1esk+34[view] [source] 2023-11-20 00:26:31
>>smolde+(OP)
both Google and Meta have extremely competitive LLMs

No they don’t. Both Bard and Llama are far behind GPT-4, and GPT-4 finished training in August 2022.

replies(2): >>smolde+G4 >>fragme+s7
3. smolde+G4[view] [source] [discussion] 2023-11-20 00:29:23
>>p1esk+34
GPT-4 is an order of magnitude larger but not an order of magnitude better. Even before that, GPT-3 was not a particularly high-water mark (compared to T5 and BERT), and GPT-2 was famously so expensive to run that it racked up a six-figure monthly cloud bill just for inference. Lord knows what GPT-4 costs at scale, but I'm not convinced it's cost-competitive with the alternatives.
replies(1): >>p1esk+F6
4. p1esk+F6[view] [source] [discussion] 2023-11-20 00:40:54
>>smolde+G4
GPT-4 is an existential threat to Google. Since March 24 of this year, for 80% of the questions I would previously have googled, I ask GPT-4 instead. And Google knows this. They are throwing billions at the problem but simply cannot catch up.
replies(2): >>smolde+A9 >>ben_w+Ff1
5. fragme+s7[view] [source] [discussion] 2023-11-20 00:46:02
>>p1esk+34
Why does ChatGPT-4 say its knowledge cutoff date is April 2023?

https://chat.openai.com/share/3dd98da4-13a5-4485-a916-60482a...

replies(1): >>p1esk+Si
6. smolde+A9[view] [source] [discussion] 2023-11-20 01:00:54
>>p1esk+F6
Beating OpenAI in a money-pissing competition is not their priority. I don't use Google or harbor much love for them, but the existence of AI does not detract from the value of advertising. If anything, it funnels more people toward it, as they look to monetize what is otherwise unprofitable. ChatGPT is not YouTube; it doesn't print money.

Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.

replies(1): >>p1esk+Jj
7. p1esk+Si[view] [source] [discussion] 2023-11-20 02:07:58
>>fragme+s7
There are many versions of the GPT-4 model that appeared after the first one. My point is that Google and the others still cannot match the quality of that first version, more than a year after it finished training.
replies(1): >>smolde+ww
8. p1esk+Jj[view] [source] [discussion] 2023-11-20 02:14:30
>>smolde+A9
Beating OpenAI in a money-pissing competition is not their priority

I bet Google has already spent an order of magnitude more on developing a GPT-4 rival than OpenAI spent on GPT-4.

replies(1): >>smolde+5x
9. smolde+ww[view] [source] [discussion] 2023-11-20 04:06:14
>>p1esk+Si
According to the PaLM 2 technical report (page 14), their model beats GPT-4 on several reasoning benchmarks: https://ai.google/static/documents/palm2techreport.pdf
10. smolde+5x[view] [source] [discussion] 2023-11-20 04:12:43
>>p1esk+Jj
For the sake of your wallet, I hope you don't put money on that. Google certainly spends an order of magnitude more than OpenAI overall, because they have been around longer, ship their own hardware, and maintain their own inference library. What they spend on training their LLMs is a minority of that, full stop.

I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.

replies(1): >>p1esk+XB
11. p1esk+XB[view] [source] [discussion] 2023-11-20 05:00:11
>>smolde+5x
Google's advantage here is so blatantly obvious

Exactly. Google has so many more resources and is trying so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that. If you haven't tried comparing Bard's output to GPT-4's on the same questions, try it; it will become obvious.

It's quite possible their rumored Gemini model will finally catch up with GPT-4 at some point, probably around the time GPT-5 is released.

replies(1): >>smolde+P32
12. ben_w+Ff1[view] [source] [discussion] 2023-11-20 08:47:37
>>p1esk+F6
From a user's POV, GPT-4 with search might be, but not GPT-4 alone. There's still a need for live results and for citing specific documents. Search doesn't have to mean Google, but it can mean Google.

From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against SEO spam. That could be an existential threat to the value of the web in general, even as an input (for training and for web search) to LLMs.

LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.

replies(1): >>p1esk+qE4
13. smolde+P32[view] [source] [discussion] 2023-11-20 13:52:01
>>p1esk+XB
If you see "beating GPT-4" as an actual goalpost, then sure. Google doesn't; their output reflects that.
14. p1esk+qE4[view] [source] [discussion] 2023-11-21 02:08:19
>>ben_w+Ff1
LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio

Soon (within 1-2 years), LLMs will be good enough to improve the general SNR of the web. In fact, I think GPT-4 might already be.

replies(1): >>ben_w+XL9
15. ben_w+XL9[view] [source] [discussion] 2023-11-22 11:06:13
>>p1esk+qE4
I think they'd only be able to improve the SNR if they could separate fact from fiction. While I would love to believe they'll manage that within 1-2 years, I don't see a happy path to it.