zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. therei+4H 2024-03-01 15:25:52
>>modele+(OP)
Allowing startups to begin as non-profits for tax benefits, only to 'flip' into profit-seeking ventures, is a moral hazard, IMO. It risks damaging public trust in the non-profit sector as a whole. This lawsuit is important.
2. permo-+IQ 2024-03-01 16:19:04
>>therei+4H
I completely agree. AGI is an existential threat, but the real meat of this lawsuit is ensuring that founders can't have their cake and eat it too like this. what's the point of a non-profit if it can simply pivot to making profit the second it has something of value? the answer is that there is none, besides dishonesty.

it's quite sad that the American regulatory system is in such disrepair that we could even get to this point: it's not the government pulling OpenAI up on this bare-faced deception, it's a morally questionable billionaire.

3. nradov+jU 2024-03-01 16:36:18
>>permo-+IQ
There is no reliable evidence that AGI is an existential threat, nor that it is even achievable within our lifetimes. Current OpenAI products are useful and technically impressive but no one has shown that they represent steps towards a true AGI.
4. permo-+yZ 2024-03-01 17:00:12
>>nradov+jU
you're aware of what a threat is, I presume? a threat is not something that is reliably proven; it is a possibility. there are endless possibilities for how AGI could be an existential threat, and many of them are extremely plausible, not just to me, but to many experts in the field, who often literally have something to lose by expressing those opinions.

>no one has shown that they represent steps towards a true AGI.

this is completely irrelevant. there is no solid definition of intelligence or consciousness, never mind artificial intelligence and/or consciousness. there is no way to prove such a thing without actually being that consciousness; all we have are inputs and outputs. as of now, we cannot rule out that stringing together incredibly complex neural networks to produce information does in fact produce a form of consciousness, because we do not live inside those networks, and we simply do not know what consciousness is.

is it achievable in our lifetimes or not? well, even if it isn't, which I find deeply unlikely, it's very silly to just handwave and say "yeah, we should just keep barrelling towards this willy-nilly because it's probably not a threat and it'll never happen anyway"
