zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. jvande+P4[view] [source] 2024-05-17 15:45:53
>>robbie+(OP)
Honestly, having a "long-term AI risk" team is a great idea for an early-stage startup claiming to build general AI. It makes it look like they're taking the mission and its risks seriously.

But for a product-focused LLM shop trying to infuse LLMs into everything, it makes sense to tone down the hype.

2. nprate+J9[view] [source] 2024-05-17 16:14:37
>>jvande+P4
It makes it look like the tech is so rad it's dangerous. Total bollocks, but great marketing.
3. reduce+Zb[view] [source] 2024-05-17 16:27:35
>>nprate+J9
Ilya and Jan Leike[0] resigned (or were fired) because they believed their jobs were a temporary marketing expense? Or maybe you think you understand the risks of AGI better than they do, the creators of the frontier models?

Do you think that's a coherent worldview, compared to the one staring you in the face? I'll leave it to the reader whether to believe this conspiratorial, profit-motive take over the scientists themselves, who say:

“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

[0] https://scholar.google.co.uk/citations?user=beiWcokAAAAJ&hl=...

4. nprate+bf[view] [source] 2024-05-17 16:45:39
>>reduce+Zb
People can believe whatever they like. It doesn't make them right.

The flaw is in your quote: there is no "superintelligent AI". We don't have AGI, and given they were saying this sort of thing a few years ago (GPT-2?), it's laughable.

They're getting way ahead of themselves.
