zlacker

[return to "MobileDiffusion: Rapid text-to-image generation on-device"]
1. minima+s9[view] [source] 2024-01-31 23:36:39
>>jasond+(OP)
> With superior efficiency in terms of latency and size, MobileDiffusion has the potential to be a very friendly option for mobile deployments given its capability to enable a rapid image generation experience while typing text prompts. And we will ensure any application of this technology will be in-line with Google’s responsible AI practices.

So I'm interpreting this as meaning it won't ever get released.

◧◩
2. kccqzy+Vj[view] [source] 2024-02-01 00:58:45
>>minima+s9
I'm interpreting it as meaning they will add a layer of safety restrictions. Understandable given the furore over the recent incident with AI-generated images of Taylor Swift.

Everyone needs to do this and probably is already doing this. Search for "ChatGPT lobotomized" and you'll see plenty of complaints about the safety filters added by OpenAI.

◧◩◪
3. babysh+bM[view] [source] 2024-02-01 07:05:17
>>kccqzy+Vj
I'm much more comfortable with the idea of AI watermarking the images it creates than with it refusing to create images because of "safety", which in practice more often means not wanting to offend anyone. Imagine if word processors like Google Docs refused to write things you wanted to write because of mature themes. The important thing, in my opinion, is to make it much more difficult to pass off AI-generated content as authentic, and to make provenance traceable if someone were to do something like create revenge porn with AI, not to make AI refuse to create explicit material at all.
◧◩◪◨
4. AlecSc+361[view] [source] 2024-02-01 11:00:07
>>babysh+bM
Whether it's authentic or not actually isn't important in a lot of cases, though. Consider someone like Mia Janin, who recently took her own life after being harassed with deepfakes. Everyone understood that the images weren't "authentic", but their power to cause distress was very real.
◧◩◪◨⬒
5. postal+nm1[view] [source] 2024-02-01 13:29:10
>>AlecSc+361
The number of people who have committed suicide after being harassed with memes or emoji must be higher than the number harassed with deepfakes. Too bad nobody is interested enough in banning emoji to do a study.
◧◩◪◨⬒⬓
6. spangr+bs1[view] [source] 2024-02-01 14:08:55
>>postal+nm1
Why stop there? I think we'd all agree that "mean words", either written or spoken, have immense power to "cause distress" and have driven many a person to suicide. We should ban those.
◧◩◪◨⬒⬓⬔
7. AlecSc+Ct1[view] [source] 2024-02-01 14:15:15
>>spangr+bs1
We do. Incitement to violence and "true threats", for example, already fall outside of First Amendment protections. I personally see deepfakes created or disseminated for harassment purposes as an act of violence.
[go to top]