> The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.
I can see the argument here. It would be super fun to test this model's ability to generate arbitrary images, but "arbitrary" also contains space for a lot of distasteful stuff. Add in this point:
> While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.
That said, I hope they're serious about the "framework for responsible externalization" part, both because it would be really fun to play with this model and because it would be interesting to test it outside of their hand-picked examples.
So we can't have this model because of ... the mere possibility of stereotypes? By this logic, humans should all die, since we certainly encode some nasty stereotypes in our brains.
This level of dishonesty in refusing to give back to the community is not unexpected at this point, but seeing apologists for it here is.
Which means that it is always you who decides whether you'll be offended or not.
Not to mention the weirdness of random strangers on the internet feeling the need to protect me, another random stranger on the internet, from being offended. Not to mention that you don't need to be a genius to find pornography, racism, and pretty much anything else on the internet...
I'm really quite worried about the direction this is all going in. More and more, the internet is being censored and filtered. Where are the days of IRC, when a single refresh erased everything that was said~
I have a friend who used to have an abuser who talked like that. Every time she said or did something that hurt him, it was his fault for feeling that way, and a real man wouldn't have any problem with it.
I'm all for mindfulness and metacognition as valuable skills. They helped me realize that a bad grade every now and then didn't mean I was lazy, stupid, or that I didn't belong in college.
But this argument that people should indiscriminately suppress emotional pain is dangerous. It entails that people ought to tolerate abuse and mistreatment of themselves and of other people. And that's wrong.