zlacker

[parent] [thread] 11 comments
1. jandre+(OP)[view] [source] 2022-05-23 21:29:22
Is there a way to try this out? DALL-E2 also had amazing demos but the limitations became apparent once real people had a chance to run their own queries.
replies(1): >>wmfrov+R1
2. wmfrov+R1[view] [source] 2022-05-23 21:38:59
>>jandre+(OP)
Looks like no, "The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access."
replies(1): >>nomel+x2
3. nomel+x2[view] [source] [discussion] 2022-05-23 21:43:09
>>wmfrov+R1
> the risks of unrestricted open-access

What exactly is the risk?

replies(5): >>tpmx+C6 >>jimmyg+I7 >>varenc+q9 >>jtvjan+Ea >>SamBam+un
4. tpmx+C6[view] [source] [discussion] 2022-05-23 22:06:03
>>nomel+x2
Really unpleasant content being produced, obviously.
5. jimmyg+I7[view] [source] [discussion] 2022-05-23 22:11:24
>>nomel+x2
A variation on the axiom "you cannot idiot-proof something, because there's always a bigger idiot."
6. varenc+q9[view] [source] [discussion] 2022-05-23 22:22:17
>>nomel+x2
See section 6 titled “Conclusions, Limitations and Societal Impact” in the research paper: https://gweb-research-imagen.appspot.com/paper.pdf

One quote:

> “On the other hand, generative methods can be leveraged for malicious purposes, including harassment and misinformation spread [20], and raise many concerns regarding social and cultural exclusion and bias [67, 62, 68]”

replies(1): >>userbi+rb
7. jtvjan+Ea[view] [source] [discussion] 2022-05-23 22:29:40
>>nomel+x2
If the model is used to generate offensive imagery, it may result in a negative press response directed at the company.
8. userbi+rb[view] [source] [discussion] 2022-05-23 22:34:33
>>varenc+q9
But do we trust that those who do have access won't use it for "malicious purposes" (purposes they may not consider malicious, but which might be to those without access)?
replies(1): >>colinm+4e
9. colinm+4e[view] [source] [discussion] 2022-05-23 22:53:22
>>userbi+rb
It's not up to you. It's up to them, and they trust themselves/don't care about your definition of malicious.
10. SamBam+un[view] [source] [discussion] 2022-05-24 00:10:11
>>nomel+x2
"Make a photograph of Joe Biden in a hotel room bed with Kim Jong-un."

Simply the ease with which people are going to be able to make extremely realistic fake photographs is going to do some damage to the world. It's inevitable, but it might be good to postpone it.

replies(2): >>lawxls+3H1 >>nomel+L46
11. lawxls+3H1[view] [source] [discussion] 2022-05-24 13:04:56
>>SamBam+un
The counter-argument is that, by the time these models become available to the public, they will produce output that cannot be distinguished from real photos, so the damage will be even greater than if they became available today.
12. nomel+L46[view] [source] [discussion] 2022-05-25 18:12:34
>>SamBam+un
> able to make extremely realistic fake photographs is going to do some damage to the world

I don't understand why. Anyone who has seen a blockbuster movie in the last 15 years is very familiar with the concept of creating people, sets, and entire worlds that don't exist, with photorealistic accuracy. Being able to make fictitious photorealistic images isn't remotely a new ability; it's just an ability that's now automated.

If this is released, I think any damage would be extremely fleeting, as people pump out thousands of these images and grow bored of them. The only danger is making this ability (to make false images) seem new (absolutely not) or rare (not anymore)!
