zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. qz_kb+AI[view] [source] 2022-05-24 02:21:16
>>kevema+(OP)
I have to wonder how much releasing these models will "poison the well" and fill the internet with AI-generated images that make training an improved model difficult. After all, if 9 out of 10 "oil painted" images online start coming from these generative models, it'll become increasingly difficult to scrape the web and learn from real-world data in a variety of domains. Essentially, once these things are widely available, the internet will become harder to scrape for good data, and models will start training on their own output. The internet will also probably get worse for humans, since search results will be completely polluted with "sort of realistic" images that can be spit out at breakneck speed by smashing dictionary words together...
◧◩
2. agar+KP[view] [source] 2022-05-24 03:49:07
>>qz_kb+AI
The irony is that when the majority of content becomes computer-generated, most of that content will also be computer-consumed.

Neal Stephenson covered this briefly in "Fall; or, Dodge in Hell." So much 'net content was garbage, AI-generated, and/or spam that it could only be consumed via "editors" (either AI or AI+human, depending on your income level) that separated the interesting sliver of content from...everything else.

◧◩◪
3. jilles+rd1[view] [source] 2022-05-24 07:56:42
>>agar+KP
He was definitely onto something in that book: people also resort to using blockchains to fingerprint their behavior and build an unbreakable chain of authenticity. Later in the book, this is used to authorize hardware access for the deceased and uploaded individuals.

A bit far out there in terms of plot but the notion of authenticating based on a multitude of factors and fingerprints is not that strange. We've already started doing that. It's just that we currently still consume a lot of unsigned content from all sorts of unreliable/untrustworthy sources.

Fake news stops being a thing as soon as you stop doing that. Having people sign off on and vouch for content needs to become a thing. I might see Joe Biden saying something in a video on YouTube, but how do I know whether that's real?

With deepfakes already happening, that's no longer an academic question. The answer is that you can't know, unless people sign the content: Joe Biden, any journalists involved, etc. You still wouldn't know with 100% certainty that it's real, but you would know whether the relevant people signed off on it, and could then simply ignore any unsigned content from non-reputable sources. Reputations are something we can track using signatures, blockchains, and other solutions.

It's interesting that Neal Stephenson presents both the problem and a possible solution in that book.

◧◩◪◨
4. dirkc+7A1[view] [source] 2022-05-24 11:27:44
>>jilles+rd1
> blockchains to fingerprint their behavior and build an unbreakable chain of authenticity. Later in that book that is used to authorize the hardware access of the deceased and uploaded individuals.

Maybe I misunderstood, but I had it that people used generative AI models to transform the media they produced. The generated content can be uniquely identified, but the creator (or creators) retains anonymity. Later, these generative models morphed into a form of identity, since they could be accurately and uniquely identified.
