zlacker

[parent] [thread] 5 comments
1. cornho+(OP)[view] [source] 2023-11-20 08:48:47
I'm not sure I follow this chain of arguments, which I hear often. So, a technology becomes possible that has the potential to massively disrupt social order while being insanely profitable to those who employ it. The knowledge is already out there in scientific journals, or if it's not, it can be grokked via corporate espionage or by paying huge salaries to the employees of OpenAI or whoever else has it.

What exactly can a foundation in charge of OpenAI do to prevent this unethical use of the technology? If OpenAI refuses to put it to some unethical use, what prevents other, for-profit enterprises from doing exactly that? How can private actors stop this without government regulation?

Sounds like Truman's apocryphal "the Russians will never have the bomb". Well, they did, just four years later.

replies(3): >>cyanyd+ju >>bart_s+cC >>ricard+gQ1
2. cyanyd+ju[view] [source] 2023-11-20 12:05:45
>>cornho+(OP)
in theory, a nonprofit would demonstrate a government need and the nonprofit would be bought out by the government.

in America, nonprofits are just how rich people run around chasing tax avoidance, plaudits, and now wealth transfers.

I doubt OpenAI is any different, and I doubt Altman is anything but a figurehead.

but nonprofits in America are how the government has chosen to be derelict in its duties.

replies(1): >>mlrtim+rx
3. mlrtim+rx[view] [source] [discussion] 2023-11-20 12:28:20
>>cyanyd+ju
In your world, yes, but in another, nonprofits are able to do research that the government should not, cannot, or is too inefficient to ever get working.

I'm no embarrassed billionaire, but there is a place for both.

4. bart_s+cC[view] [source] 2023-11-20 12:57:40
>>cornho+(OP)
I think the last couple of decades have demonstrated the dangers of corporate leadership beholden to the whims of shareholders. Jack Welch-style management, where the quarterly numbers always go up at the expense of the employee, the company, and the customer, has proven to be great at building a house of cards that stands just long enough for a select few to make fortunes before collapsing. In the case of companies like GE or Boeing, the fallout is the collapse of the company or a "few" hundred people losing their lives in plane crashes. In the case of AI, the potential for societal-level destructive consequences is higher.

A non-profit is not by any means guaranteed to avoid the dangers of AI. But at a minimum it will avoid the greed-driven myopia that seems to be the default when companies are beholden to Wall Street shareholders.

replies(1): >>robert+CD1
5. robert+CD1[view] [source] [discussion] 2023-11-20 17:37:50
>>bart_s+cC
I don't think cherry-picked examples mean much. But even so, you don't seem to be answering the question, which was "how will being a non-profit stop other people behaving unethically?"
6. ricard+gQ1[view] [source] 2023-11-20 18:21:46
>>cornho+(OP)
Look up the reason OpenAI was founded. The idea was exactly that someone would get there first, and it better be an entity with beneficial goals. So they set it up to advance the field - which they have been doing successfully - while having a strict charter that would ensure alignment with humanity (aka prevent it from becoming a profit-driven enterprise).