zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. benree+al[view] [source] 2022-05-23 22:55:39
>>kevema+(OP)
I apologize in advance for the elitist-sounding tone. In my defense, I have nothing to do with the people I'm calling elite; I'm certainly not talking about myself.

Without a fairly deep grounding in this stuff it’s hard to appreciate how far ahead Brain and DM are.

Neither OpenAI nor FAIR ever holds the top score on anything unless Google delays publication. And short of FAIR? D2 lacrosse. There are exceptions to such a brash generalization, NVIDIA's group comes to mind, but it's a very good rule of thumb. Or a rule of your whole face, the next time you're tempted to doze behind the wheel of a Tesla.

There are two big reasons for this:

- the talent wants to work with the other talent, and through a combination of foresight and deep pockets Google got that exponent on their side right around the time NVIDIA cards started breaking ImageNet. Winning the Hinton bidding war clinched it.

- the current approach of “how many Falcon Heavy launches worth of TPU can I throw at the same basic masked attention with residual feedback and a cute Fourier coloring” (rough sketch of that recipe below) inherently favors deep pockets, and obviously MSFT, sorry, OpenAI, has that. But deep pockets also non-linearly scale outcomes when you’ve got in-house hardware for multiply-mixed precision.
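
For the curious, here's a minimal sketch of that "same basic" recipe: masked self-attention, residual connections, and fixed sinusoidal ("Fourier") position features. Plain PyTorch; every name and size here is illustrative, not anything from Google's or OpenAI's actual code.

    import math
    import torch
    import torch.nn as nn

    def sinusoidal_positions(seq_len, d_model):
        # The "Fourier coloring": fixed sin/cos features over position.
        pos = torch.arange(seq_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    class Block(nn.Module):
        def __init__(self, d_model=512, n_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln1 = nn.LayerNorm(d_model)
            self.ln2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            # Causal mask: True marks pairs that may NOT attend, so each
            # position sees only itself and earlier positions.
            n = x.size(1)
            mask = torch.triu(torch.ones(n, n, dtype=torch.bool,
                                         device=x.device), diagonal=1)
            h = self.ln1(x)
            attn_out, _ = self.attn(h, h, h, attn_mask=mask)
            x = x + attn_out               # residual ("feedback") path
            x = x + self.mlp(self.ln2(x))  # second residual path
            return x

    # Usage: add position features, run one block.
    x = torch.randn(2, 16, 512) + sinusoidal_positions(16, 512)
    y = Block()(x)

Everything past this toy is mostly scale: more blocks, wider layers, and the mixed-precision hardware to train them.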

Now clearly we’re nowhere close to Maxwell’s Demon on this stuff, and sooner or later some bright spark is going to break the logjam of needing $10-100MM in compute to squeeze a few points out of a language benchmark. But the incentives are weird here: who, exactly, does it serve for us plebs to be able to train these things from scratch?

2. dougab+bp[view] [source] 2022-05-23 23:27:43
>>benree+al
This characterization is not really accurate. OpenAI has had almost a 2 year lead with GPT-3 dominating the discussion of LLMs (large language models). Google didn’t release its paper on the powerful PaLM-540b model until recently. Similarly, CLiP, Glide, DALL-E, and DALL-E2 have been incredibly influential in visual-language models. Imagen, while highly impressive, definitely is a catch-up piece of work (as was PaLM-540b).

Google clearly demonstrates their unrivaled capability to leverage massive quantities of data and compute, but it’s premature to declare that they’ve secured victory in the AI Wars.

3. benree+Ap[view] [source] 2022-05-23 23:31:27
>>dougab+bp
I agree that it’s still a jump ball in a rapidly moving field; I was saying Google is far ahead, not that they’ve won.

And I don’t think whatever iteration of PaLM was cooking at the time GPT-3 started getting press would have looked too shabby.

I think Google crushed OpenAI on both GPT and DALL-E in short order because OpenAI published twice and someone had had enough.

4. alphab+lr[view] [source] 2022-05-23 23:46:46
>>benree+Ap
OpenAI and FAIR are definitely in the same league as Google, but Google has been all-in on AI from the beginning; they've probably spent well over $100B on AI research. I really enjoyed Genius Makers, the book an NYT reporter published last year on the history of the ML race. DeepMind apparently turned down an FB offer of double what Google was offering.
5. benree+Ir[view] [source] 2022-05-23 23:50:17
>>alphab+lr
Cade Metz is the author. Most of it I can only speculate on.

The bits and pieces I saw first hand tie out reasonably well with that account.
