zlacker

19 comments
1. discmo+(OP)[view] [source] 2022-05-24 01:46:30
For people complaining that they can't play with the model... I work at Google and I also can't play with the model :'(
replies(8): >>make3+51 >>arthur+P3 >>karmas+2i >>hathym+Sy >>octoco+AC >>interb+lV >>Semant+aG1 >>kvetch+kL1
2. make3+51[view] [source] 2022-05-24 01:56:15
>>discmo+(OP)
I mean, I don't know how that makes it any better from a reproducibility standpoint lol
3. arthur+P3[view] [source] 2022-05-24 02:27:28
>>discmo+(OP)
How does that make you feel?
replies(1): >>quickt+Es
4. karmas+2i[view] [source] 2022-05-24 05:12:42
>>discmo+(OP)
I mean, inference on this costs no small amount of money.

I don't think they would host it just for fun, then.

5. quickt+Es[view] [source] [discussion] 2022-05-24 07:02:22
>>arthur+P3
Probably like an employee
6. hathym+Sy[view] [source] 2022-05-24 08:03:58
>>discmo+(OP)
off-topic: as a google employee do you have unlimited gce credits?
7. octoco+AC[view] [source] 2022-05-24 08:36:43
>>discmo+(OP)
is your team/division hiring?
replies(1): >>Firmwa+4B2
8. interb+lV[view] [source] 2022-05-24 11:32:42
>>discmo+(OP)
I think they address some of the reasoning behind this pretty clearly in the write-up as well?

> The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.

I can see the argument here. It would be super fun to test this model's ability to generate arbitrary images, but "arbitrary" also contains space for a lot of distasteful stuff. Add in this point:

> While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.

That said, I hope they're serious about the "framework for responsible externalization" part, both because it would be really fun to play with this model and because it would be interesting to test it outside of their hand-picked examples.

replies(2): >>Siira+Xc2 >>Orange+lS3
9. Semant+aG1[view] [source] 2022-05-24 15:44:46
>>discmo+(OP)
Is brocoliman standing in the way? :(
10. kvetch+kL1[view] [source] 2022-05-24 16:09:27
>>discmo+(OP)
Good thing there is a company committed to Open Sourcing these sorts of AI models.

Oh wait.

Google: "it's too dangerous to release to the public"

OpenAI: "we are committed to open source AGI but this model is too dangerous to release to the public"

11. Siira+Xc2[view] [source] [discussion] 2022-05-24 18:15:39
>>interb+lV
> harmful stereotypes and representations

So we can't have this model because of ... the mere possibility of stereotypes? With this logic, humans should all die, as we certainly encode some nasty stereotypes in our brains.

This level of dishonesty as an excuse to not give back to the community is not unexpected at this point, but seeing apologists for it here is.

replies(2): >>MattSa+Sk2 >>LoveMo+vF3
12. MattSa+Sk2[view] [source] [discussion] 2022-05-24 18:54:26
>>Siira+Xc2
I think it's more that they don't want people creating NSFW images of copyrighted material. How do you even begin to protect against that litigation?
replies(1): >>fomine+qt3
13. Firmwa+4B2[view] [source] [discussion] 2022-05-24 20:20:25
>>octoco+AC
Every tech megacorp is always hiring people who can jump through the flaming code hoops just right
14. fomine+qt3[view] [source] [discussion] 2022-05-25 04:00:17
>>MattSa+Sk2
Should we ban Photoshop? It's a leap in logic but not very different.
15. LoveMo+vF3[view] [source] [discussion] 2022-05-25 06:28:02
>>Siira+Xc2
As Sadhguru said, the human experience comes from within.

Which means that it is always you who decides if you'll be offended or not.

Not to mention the weirdness that random strangers on the internet feel the need to protect me, another random stranger on the internet, from being offended. Not to mention that you don't need to be a genius to find pornography, racism and pretty much anything on the internet...

I'm really quite worried by the direction this is all going in. More and more, the internet is being censored and filtered. Where are the days of IRC, when a single refresh erased everything that was said~

replies(1): >>JellyB+oJ3
16. JellyB+oJ3[view] [source] [discussion] 2022-05-25 07:12:50
>>LoveMo+vF3
> As Sadhguru said, the human experience comes from within.
>
> Which means that it is always you who decides if you'll be offended or not.

I have a friend who used to have an abuser who talked like that. Every time she said or did something that hurt him, it was his fault for feeling that way, and a real man wouldn't have any problem with it.

I'm all for mindfulness and metacognition as valuable skills. They helped me realize that a bad grade every now and then didn't mean I was lazy, stupid, and didn't belong in college.

But this argument that people should indiscriminately suppress emotional pain is dangerous. It entails that people ought to tolerate abuse and misuse of themselves and of other people. And that's wrong.

replies(1): >>ThePC0+004
17. Orange+lS3[view] [source] [discussion] 2022-05-25 08:54:51
>>interb+lV
I find this a really bad precedent. Before long they'll achieve "super human" level general AI and be like "yeah, it's too powerful for you, we'll keep it internal".
replies(2): >>dekhn+Lz4 >>atleta+RW6
18. ThePC0+004[view] [source] [discussion] 2022-05-25 10:17:59
>>JellyB+oJ3
I think there is a huge difference between somebody willingly mistreating another person and that person taking offence at it, versus a company releasing an AI tool with absolutely no ill intent, and then someone else making decisions about what _I_ am allowed to see.
19. dekhn+Lz4[view] [source] [discussion] 2022-05-25 14:07:37
>>Orange+lS3
Wouldn't that be a good motivation to work there (and to achieve a high enough position to have access to the model)?
20. atleta+RW6[view] [source] [discussion] 2022-05-26 02:55:02
>>Orange+lS3
This is definitely how it will play out for whoever creates AGI first (and second). After all, they are investing a lot of money and resources, and a "super human" AGI is very likely worth an unimaginable amount.

Also, given the processing power and data requirements to create one, there are only a few candidates out there who can get there firstish.
