zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. aleksa+jz1[view] [source] 2024-03-01 19:48:56
>>modele+(OP)
> Indeed, as the November 2023 drama was unfolding, Microsoft’s CEO boasted that it would not matter “[i]f OpenAI disappeared tomorrow.” He explained that “[w]e have all the IP rights and all the capability.” “We have the people, we have the compute, we have the data, we have everything.” “We are below them, above them, around them.”

Yikes.

This technology definitely needs to be open source, especially if we get to the point of AGI. Otherwise Microsoft and OpenAI are going to exploit it for profit for as long as they can get away with it, while open source lags behind.

Reminds me of the moral principles that guided Zimmermann when he made PGP free for everyone: A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.

◧◩
2. reduce+yC1[view] [source] 2024-03-01 20:07:07
>>aleksa+jz1
> A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.

That's why we all have personal nukes, of course. Very safe

◧◩◪
3. archag+eD1[view] [source] 2024-03-01 20:11:01
>>reduce+yC1
I shudder at a world where only corporations had nukes.
◧◩◪◨
4. chasd0+3I1[view] [source] 2024-03-01 20:40:15
>>archag+eD1
Nuclear weapons are a ridiculous comparison and only further the gaslighting of society. At the barest of bare minimums, AI might, possibly, theoretically, perhaps pose a threat to established power structures (like any disruptive technology does). A nuclear weapon, however, definitely destroys physical objects within its effective range. Relating the two is ridiculous.
◧◩◪◨⬒
5. reduce+mP3[view] [source] 2024-03-02 18:34:39
>>chasd0+3I1
It's not a ridiculous comparison. This thread involves Sam Altman and Elon Musk, right?

Sam Altman: "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

From his essay "Machine intelligence, part 1": https://blog.samaltman.com/machine-intelligence-part-1

So, more than nukes then...

Elon Musk: "There’s a strong probability that it [AGI] will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity."
