zlacker

[parent] [thread] 4 comments
1. ma2rte+(OP)[view] [source] 2022-05-23 22:38:45
Google is very conservative about anything that can generate open-ended outputs. Also, these models are still computationally very expensive.
replies(1): >>london+R3
2. london+R3[view] [source] 2022-05-23 23:08:09
>>ma2rte+(OP)
They're expensive to train, but not terribly expensive to use, especially if you have hundreds of images you want to generate (accelerators get much better throughput at large batch sizes).

Google could totally afford it, especially if the feature was hidden behind a button the user had to click, and not just run for every image search.
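The batching point can be sketched with a toy cost model (all numbers here are hypothetical, not measurements of any Google system): each forward pass pays a fixed per-batch overhead plus a per-image compute cost, so the amortized cost per image falls as the batch grows.

```python
# Illustrative sketch of batch amortization. The constants are made up;
# only the shape of the curve matters.

FIXED_OVERHEAD_MS = 50.0   # hypothetical per-batch launch/weight-load cost
PER_IMAGE_MS = 10.0        # hypothetical per-image compute time

def time_per_image(batch_size: int) -> float:
    """Amortized wall-clock time per image for one batched forward pass."""
    total = FIXED_OVERHEAD_MS + PER_IMAGE_MS * batch_size
    return total / batch_size

for bs in (1, 8, 64, 256):
    print(f"batch={bs:4d}  ms/image={time_per_image(bs):.2f}")
```

At batch size 1 the overhead dominates (60 ms/image in this toy model); by batch 256 the per-image cost approaches the 10 ms floor, which is why generating hundreds of images at once is much cheaper per image than one at a time.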

replies(2): >>tpmx+n6 >>zone41+m9
3. tpmx+n6[view] [source] [discussion] 2022-05-23 23:27:11
>>london+R3
Input control is pretty hard - it kinda needs an AGI :). How do you stop undesirable images from being created?
replies(1): >>london+4I
4. zone41+m9[view] [source] [discussion] 2022-05-23 23:52:47
>>london+R3
With diffusion models, inference time can be significant. But this one only generates at 64x64 and then upsamples, so it's probably not too bad.
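A crude way to see the latency point (step counts and per-step time below are assumptions for illustration, not figures from any paper): diffusion sampling time is roughly the number of denoising steps times the cost of one U-Net pass, so a small 64x64 base resolution and a reduced-step sampler keep it manageable.

```python
# Toy latency model for diffusion sampling. All numbers are hypothetical.

def sampling_latency_ms(steps: int, ms_per_step: float) -> float:
    """Total sampling time = denoising steps x time per U-Net pass."""
    return steps * ms_per_step

# Assume a 64x64 base pass costs 15 ms per step.
full = sampling_latency_ms(1000, 15.0)  # full ancestral sampling schedule
fast = sampling_latency_ms(50, 15.0)    # DDIM-style reduced schedule
print(f"1000 steps: {full / 1000:.1f} s, 50 steps: {fast / 1000:.2f} s")
# → 1000 steps: 15.0 s, 50 steps: 0.75 s
```

Under these assumptions, cutting the step count does far more for latency than anything else, which is the usual lever for serving diffusion models interactively.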
5. london+4I[view] [source] [discussion] 2022-05-24 05:49:44
>>tpmx+n6
If I were running Google, I would release it with a disclaimer and not do anything technical to prevent undesirable images from being created.

How does Adobe prevent Photoshop from being used to draw offensive images? They don't... People understand that a tool can be used for good and bad.
