zlacker

[parent] [thread] 7 comments
1. dougab+(OP)[view] [source] 2022-05-23 23:27:43
This characterization is not really accurate. OpenAI has had almost a two-year lead, with GPT-3 dominating the discussion of LLMs (large language models). Google didn’t release its paper on the powerful PaLM-540B model until recently. Similarly, CLIP, GLIDE, DALL-E, and DALL-E 2 have been incredibly influential in vision-language models. Imagen, while highly impressive, is definitely a catch-up piece of work (as was PaLM-540B).

Google clearly demonstrates its unrivaled capability to leverage massive quantities of data and compute, but it’s premature to declare that it has secured victory in the AI Wars.

replies(1): >>benree+p
2. benree+p[view] [source] 2022-05-23 23:31:27
>>dougab+(OP)
I agree that it’s still a jump ball in a rapidly moving field; I was saying Google is far ahead, not that they’ve won.

And I don’t think whatever iteration of PaLM was cooking at the time GPT-3 started getting press would have looked too shabby.

I think Google crushed OpenAI on both GPT and DALL-E in short order because OpenAI published twice and someone had had enough.

replies(2): >>dougab+L1 >>alphab+a2
3. dougab+L1[view] [source] [discussion] 2022-05-23 23:42:14
>>benree+p
That’s pretty speculative and dubious (the holding-back part) given the heavy bias toward publication in the culture at Google Research and DeepMind. OpenAI has hardly been “crushed” here; PaLM and Imagen are solid, incremental advances, but given what came before them, not earth-shattering.

If I were going to cite evidence for Alphabet’s “supremacy” in AI, I would’ve picked something more novel and surprising, such as AlphaFold, or perhaps even Gato.

It’s not clear to me that Google has anything which compares to Reality Labs, although this may simply be my own ignorance.

Nvidia surely scooped Google with Instant Neural Graphics Primitives, in spite of Google publishing dozens of (often very interesting) NeRF papers. It’s not a war; all these works build on one another.

replies(1): >>benree+K4
4. alphab+a2[view] [source] [discussion] 2022-05-23 23:46:46
>>benree+p
OpenAI and FAIR are definitely in the same league as Google, but Google has been all-in on AI from the beginning. They’ve probably spent well over $100B on AI research. I really enjoyed Genius Makers, a book on the history of the ML race that came out last year from an NYT reporter. DeepMind apparently turned down an FB offer of double what Google was offering.
replies(1): >>benree+x2
5. benree+x2[view] [source] [discussion] 2022-05-23 23:50:17
>>alphab+a2
Cade Metz is the author; most of it I can only speculate on.

The bits and pieces I saw first hand tie out reasonably well with that account.

6. benree+K4[view] [source] [discussion] 2022-05-24 00:07:22
>>dougab+L1
I want to be clear, all of this stuff is fascinating, expensive, and difficult. With the possible exception of a few trailer-park weirdos like me, it basically takes a PhD to even stay on top of the field, and you clearly know your stuff.

And to be equally clear, I have no inside baseball on how Brain/DM choose when to publish. I have some watercooler chat on the friendly but serious rivalry between those groups, but that’s about it.

I’m looking from the outside in at OpenAI getting all the press and attention, which sounds superficial but sooner or later turns into actual hires of actual star-bound postdocs, and Google lying a little low for a few years.

Then we get Gato, Imagen, and PaLM in the space of like what, 2 months?

Clearly I’m speculating that someone pulled the trigger, but I don’t think it’s like, absurd.

replies(1): >>dougab+27
7. dougab+27[view] [source] [discussion] 2022-05-24 00:25:50
>>benree+K4
Scaling up improved versions of existing recipes can be done surprisingly fast if you have strong DL infrastructure. Also, GPT-3 was built on top of previous advances such as Google’s Transformer architecture. I’m surprised that it took Google so long to answer w/ PaLM, though it seems plausible to me that they wanted a clear enough qualitative advancement that people didn’t immediately say, “So what.”

You could’ve had the same reaction years ago when Google published GoogLeNet followed by a series of increasingly powerful Inception models: namely, that Google would wind up owning the DNN space. But it didn’t play out that way, perhaps because Google dragged its feet releasing the models and training code, and by the time it did, there were simpler and more powerful models like ResNet available.

Meta’s recent release of the actual OPT LLM weights is probably going to have more impact than PaLM, unless Google can be persuaded to open up that model.

replies(1): >>benree+oa
8. benree+oa[view] [source] [discussion] 2022-05-24 00:56:18
>>dougab+27
There are a lot of really knowledgeable people on here, but this field is near and dear to my heart and it’s obvious that you know it well.

I don’t know what “we should grab a coffee or a beer sometime” means in the hyper-global post-C19 era, but I’d love to speak more on this without dragging a whole HN comment thread through it.

Drop me a line if you’re inclined: ben.reesman at gmail
