zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. benree+al[view] [source] 2022-05-23 22:55:39
>>kevema+(OP)
I apologize in advance for the elitist-sounding tone. In my defense, I have nothing to do with the people I’m calling elite; I’m certainly not talking about myself.

Without a fairly deep grounding in this stuff it’s hard to appreciate how far ahead Google Brain and DeepMind are.

Neither OpenAI nor FAIR ever has the top score on anything unless Google delays publication. And short of FAIR? D2 lacrosse. There are exceptions to such a brash generalization (NVIDIA’s group comes to mind), but it’s a good enough rule of thumb to bet on. Or to bet your whole face on, the next time you’re tempted to doze behind the wheel of a Tesla.

There are two big reasons for this:

- the talent wants to work with the other talent, and through a combination of foresight and deep pockets Google got that exponent on their side right around the time NVIDIA cards started breaking ImageNet. Winning the Hinton bidding war clinched it.

- the current approach of “how many Falcon Heavy launches’ worth of TPUs can I throw at the same basic masked attention with residual feedback and a cute Fourier coloring” inherently favors deep pockets, and obviously MSFT, sorry, OpenAI, has those. But deep pockets also scale outcomes non-linearly when you’ve got in-house hardware for mixed-precision multiplies.

Now clearly we’re nowhere close to Maxwell’s Demon on this stuff, and sooner or later some bright spark is going to break the logjam of needing $10-100MM in compute to squeeze a few points out of a language benchmark. But the incentives are weird here: who, exactly, does it serve for us plebs to be able to train these things from scratch?

2. meowfa+Rn[view] [source] 2022-05-23 23:16:09
>>benree+al
Not elitist at all; I highly appreciate this post. I know the basics of ML but otherwise am clueless when it comes to the true depths of this field, and it's interesting to hear this perspective.
3. benree+1q[view] [source] 2022-05-23 23:35:01
>>meowfa+Rn
I used a lot of jargon, lingo, and inside baseball in that post; it was intended for people with a deep background.

But if you’re interested I’m happy to (attempt to) answer anything that was jargon: by virtue of HN my answers will be peer-reviewed in real time, and with only modest luck, a true expert might chime in.

4. blindi+1y[view] [source] 2022-05-24 00:41:25
>>benree+1q
Is there a handy list of generally recognized AI advancements, and their owners, that you would recommend reviewing? Or perhaps, seminal papers published? I'm only tangentially familiar with the field but would be curious to learn about the clash of the Titans playing out. Thanks!
5. benree+7C[view] [source] 2022-05-24 01:19:09
>>blindi+1y
That’s too big a question to even attempt an answer in an HN comment, but to try to answer a realistic subset of it: “Attention Is All You Need”, from 2017, is the paper most germane to my remark, and probably to the thread. The modeling style it introduced often gets called a “transformer”.

The TLDR is that people had been trying for ages to capture long-distance relationships (distance in the input or output sequence, not inside the black box) in a way that was amenable to traditional neural-network training techniques. That’s non-obvious to do because your basic NN takes its input without any distance metric between the pieces. Put more plainly: it can know all the words in a sentence but struggles with what order they’re in without some help.
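
To make that concrete, here’s a toy sketch, entirely mine and not from the thread: a “bag of embeddings” model that pools word vectors before a dense layer literally cannot tell “dog bites man” from “man bites dog”. All names and numbers below are made up for illustration.

    import numpy as np

    # A "bag of embeddings" model pools the word vectors before the dense
    # layer, so shuffling the words cannot change the output: order is invisible.
    rng = np.random.default_rng(0)
    vocab_size, dim = 1000, 16
    embeddings = rng.normal(size=(vocab_size, dim))  # one vector per word
    weights = rng.normal(size=(dim, 1))              # a single dense layer

    def score(token_ids):
        pooled = embeddings[token_ids].mean(axis=0)  # order-insensitive pooling
        return float(pooled @ weights)

    sentence = [5, 42, 7, 99]   # e.g. "dog bites man"
    shuffled = [99, 7, 42, 5]   # e.g. "man bites dog"
    assert np.isclose(score(sentence), score(shuffled))  # same score either way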

The state of the art for a while was something called an LSTM, and those gadgets are still useful sometimes, but they’ve mostly been obsoleted by this attention/transformer business.

That paper had a number of cool things in it, but two stand out:

- by blinding an NN to some parts of the input (“masking”) you can incentivize/compel it to look at (“attend to”) others. That’s a gross oversimplification, but I think it gets the gist of it. People have come up with very clever ways to boost this or that part of the input in a context-dependent way (there’s a toy sketch of both bullets after this list).

- by playing with some trigonometry you can get a unique shape for each position that can simply be summed onto something else (the word embeddings), which gives the model its “bearings”, so to speak, as to “where” it is in the input: such-and-such a word is closer to the beginning of the paragraph, that sort of thing. People have also gotten very clever about how to do this, but the idea is the same: how do I tell a neural network that there’s structure in what would otherwise be a pile of numbers?
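
Since both bullets are concrete enough to sketch, here’s a minimal single-head NumPy toy of the standard recipe as I understand it from the paper. Everything is stripped down for illustration: real transformers use learned query/key/value projections and multiple heads, and the dimensions and names here are invented.

    import numpy as np

    seq_len, dim = 6, 8
    rng = np.random.default_rng(0)
    x = rng.normal(size=(seq_len, dim))  # one embedding vector per token

    # Bullet 2: a sinusoidal positional encoding is a unique, fixed pattern
    # per position, summed onto the embeddings to bake "where" into "what".
    pos = np.arange(seq_len)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    pe = np.zeros((seq_len, dim))
    pe[:, 0::2] = np.sin(angles)   # even channels get sines
    pe[:, 1::2] = np.cos(angles)   # odd channels get cosines
    x = x + pe

    # Bullet 1: masked self-attention. Blinded positions get a score of -inf,
    # so after the softmax they receive zero weight and the model is forced
    # to attend to what remains (here, a causal "don't look ahead" mask).
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    scores = (x @ x.T) / np.sqrt(dim)         # toy case: queries = keys = x
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    output = weights @ x                      # each row: weighted mix of visible tokens

The mask is just a -inf slipped in before the softmax; the positional signal is just an addition. That’s the whole trick in both cases.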
