zlacker

[return to "Sam Altman, Greg Brockman and others to join Microsoft"]
1. shubha+e4[view] [source] 2023-11-20 08:14:03
>>JimDab+(OP)
Have to give it to Satya. There's a thin possibility that Microsoft would have to write off its entire $10B (or more?) investment in OpenAI, but that isn't Satya's focus. The focus is on what he can do next. Maybe recruit the most formidable AI team in the world, freed from the shackles of an awkward non-profit owning a for-profit company? Give enough (cash) incentives and most OpenAI employees would have no qualms about following Sam and Greg. It will take time for sure, but Microsoft can now capture an even bigger slice of THE FUTURE than was possible with the OpenAI investment.
2. ricard+q5[view] [source] 2023-11-20 08:19:25
>>shubha+e4
And this kind of thinking seems to be the exact reason he was pushed away. “The future” as envisioned by a megacorp might not be that great.
3. cornho+5c[view] [source] 2023-11-20 08:48:47
>>ricard+q5
I'm not sure I follow this chain of arguments, which I hear often. So, a technology becomes possible, that has the potential to massively disrupt social order - while being insanely profitable to those who employ it. The knowledge is already out there in scientific journals, or if it's not, it can be grokked via corporate espionage or paying huge salaries to the employees of OpenAI or whoever else has it.

What exactly can a foundation in charge of OpenAI do to prevent this unethical use of the technology? If OpenAI refuses to use it for some unethical goal, what prevents other, for-profit enterprises from doing exactly that? How can private actors stop this without government regulation?

Sounds like Truman's apocryphal "the Russians will never have the bomb". Well, they did, just 4 years later.

4. bart_s+hO[view] [source] 2023-11-20 12:57:40
>>cornho+5c
I think the last couple of decades have demonstrated the dangers of corporate leadership beholden to the whims of shareholders. Jack Welch-style management, where the quarterly numbers always go up at the expense of the employee, the company, and the customer, has proven great at building a house of cards that stands just long enough for a select few to make fortunes before it collapses. In the case of companies like GE or Boeing, the fallout is the collapse of the company or a "few" hundred people losing their lives in plane crashes. In the case of AI, the potential for societal-level destructive consequences is higher.

A non-profit is not by any means guaranteed to avoid the dangers of AI. But at a minimum it will avoid the greed-driven myopia that seems to be the default when companies are beholden to Wall Street shareholders.

5. robert+HP1[view] [source] 2023-11-20 17:37:50
>>bart_s+hO
I don't think cherry-picked examples mean much. But even so, you don't seem to be answering the question, which was "how will being a non-profit stop other people from behaving unethically?"