zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. discmo+gF 2022-05-24 01:46:30
>>kevema+(OP)
For people complaining that they can't play with the model... I work at Google and I also can't play with the model :'(
2. interb+BA1 2022-05-24 11:32:42
>>discmo+gF
I think they address some of the reasoning behind this pretty clearly in the write-up as well?

> The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.

I can see the argument here. It would be super fun to test this model's ability to generate arbitrary images, but "arbitrary" also contains space for a lot of distasteful stuff. Add in this point:

> While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.

That said, I hope they're serious about the "framework for responsible externalization" part, both because it would be really fun to play with this model and because it would be interesting to test it outside of their hand-picked examples.

3. Orange+Bx4 2022-05-25 08:54:51
>>interb+BA1
I think this sets a really bad precedent. Not long from now they'll achieve "super human" level general AI and be like "yeah, it's too powerful for you, we'll keep it internal".
4. atleta+7C7 2022-05-26 02:55:02
>>Orange+Bx4
This is definitely how it will play out for whoever creates AGI first (and second). After all, they are investing a lot of money and resources, and a "super human" AGI is very likely worth an unimaginable amount.

Also, given the processing power and data requirements to create one, there are only a few candidates out there who can get there firstish.
