zlacker

[return to "Jan Leike's OpenAI departure statement"]
1. tptace+Rf 2024-05-17 17:42:03
>>jnnnth+(OP)
So... this is relevant if you think AGI superintelligence is a thing, and not so clearly relevant otherwise?
2. llamai+qg 2024-05-17 17:45:28
>>tptace+Rf
What does "if you think AGI superintelligence is a thing" mean? If you believe it exists currently, could exist, will exist, or something else?
3. tptace+Kg 2024-05-17 17:47:47
>>llamai+qg
Any of the above. Maybe, if you really want to put some unneeded extra rigor into the analysis, add "within the lifetime of anyone born in 2024".

I don't want to, like, have the argument here about it. Nobody will persuade anybody of anything. But it is not a truth universally acknowledged that AGI superintelligence is actually a thing.

There are other reasons to have qualms about OpenAI! They could be misleading the market about their capabilities. They could be abusing IP. They could be ignoring clear and obvious misuse cases, the same way Facebook/Meta did in South Asia.

But this exit statement seems to be much more about AGI superintelligence than any of that stuff. OK, so if I don't think that's a thing, I don't have to pay attention, right?

4. Tenoke+9B 2024-05-17 20:06:02
>>tptace+Kg
There are a lot of things posted on HN that I don't need to, and end up not, paying attention to. I respect you, but these comments about how you don't need to pay attention to this topic are at best unproductive.