zlacker

[return to "MobileDiffusion: Rapid text-to-image generation on-device"]
1. minima+s9[view] [source] 2024-01-31 23:36:39
>>jasond+(OP)
> With superior efficiency in terms of latency and size, MobileDiffusion has the potential to be a very friendly option for mobile deployments given its capability to enable a rapid image generation experience while typing text prompts. And we will ensure any application of this technology will be in-line with Google’s responsible AI practices.

So I'm interpreting this as meaning it won't ever be released.

2. kccqzy+Vj[view] [source] 2024-02-01 00:58:45
>>minima+s9
I'm interpreting it as meaning they will add a layer of safety restrictions. Understandable given the furore over the recent Taylor Swift AI-generated image incident.

Everyone needs to do this and probably is already doing this. Search for "ChatGPT lobotomized" and you'll see plenty of complaints about the safety filters added by OpenAI.

3. babysh+bM[view] [source] 2024-02-01 07:05:17
>>kccqzy+Vj
I'm much more comfortable with the idea of AI watermarking the images it creates than with it refusing to create images because of "safety", which in practice more often means not wanting to offend anyone. Imagine if word processors like Google Docs refused to write things you wanted to write because of mature themes. The important thing, in my opinion, is to make it a lot more difficult to pass off AI-generated content as authentic, and to make provenance traceable if someone were to do something like create revenge porn with AI, but not to make AI refuse to create explicit material at all.
4. AlecSc+361[view] [source] 2024-02-01 11:00:07
>>babysh+bM
It being authentic or not isn't actually important in a lot of cases though. Consider someone like Mia Janin, who recently took her own life after being harassed using deepfakes. Everyone understood that the images weren't "authentic", but their power to cause distress was very real.
5. theult+gc1[view] [source] 2024-02-01 12:01:34
>>AlecSc+361
Is there a difference between being harassed with deepfakes vs fakes?

Photoshop has the power to cause distress too when used maliciously.

You go after the aggressors, not the tool used for aggression.

6. AlecSc+Gd1[view] [source] 2024-02-01 12:14:46
>>theult+gc1
Ease of use and accessibility. Think of how we control access to guns even though a baseball bat could also be used to kill or maim someone.

I agree that we should legislate against the aggressors; that's why I'm pointing out the limitations of technical solutions like watermarks. If we're talking about legislation, we need extensions to things like revenge pornography laws, and I don't see any harm in outlawing services that automate the creation of deepfakes.

Of course the only real "solution" would be for us to universally get behind teaching young boys that they are not entitled to women's bodies or their sexuality, but so many grown men apparently disagree that I can't see it happening quickly enough.

7. spangr+4l1[view] [source] 2024-02-01 13:18:08
>>AlecSc+Gd1
I'm one of the grown men who disagree. I don't think treating half the population as pre-criminals, when in reality it's an extremely tiny minority who act in this way, is a particularly good solution. If we were to apply this kind of "solution" to every undesirable behaviour exhibited by deviant minorities of both men and women, I doubt there'd be any time left for actual K-12 formal education.

It's a nice applause line though.

8. AlecSc+0n1[view] [source] 2024-02-01 13:34:46
>>spangr+4l1
So no to technical solutions, no to legislative solutions, and no to education. What do you suggest?

Edit: you disagree that men aren't entitled to women's sexuality?

Edit: I misinterpreted what was being disagreed with.
