>>ricard+q5
I'm not sure I follow this chain of arguments, which I hear often. So, a technology becomes possible that has the potential to massively disrupt the social order, while being insanely profitable to those who employ it. The knowledge is already out there in scientific journals, or if it's not, it can be grokked via corporate espionage or by paying huge salaries to the employees of OpenAI or whoever else has it.
What exactly can a foundation in charge of OpenAI do to prevent unethical use of the technology? If OpenAI refuses to put it to some unethical use, what stops other, for-profit enterprises from using it that way anyway? How can private actors prevent this without government regulation?
Sounds like Truman's apocryphal "the Russians will never have the bomb". Well, they did, just four years later.