zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. jvande+P4[view] [source] 2024-05-17 15:45:53
>>robbie+(OP)
Honestly, having a "long-term AI risk" team is a great idea for an early-stage startup claiming to build general AI. It looks like they are taking the mission and the risks seriously.

But for a product-focused LLM shop trying to infuse LLMs into everything, it makes sense to tone down the hype.

2. nprate+J9[view] [source] 2024-05-17 16:14:37
>>jvande+P4
It makes it look like the tech is so rad it's dangerous. Total bollocks, but great marketing.
3. reduce+Zb[view] [source] 2024-05-17 16:27:35
>>nprate+J9
Ilya and Jan Leike[0] resigned (were fired) because they believed their jobs were a temporary marketing expense? Or maybe you think you understand the risks of AGI better than them, the creators of the frontier models?

Do you think that is a coherent worldview, compared to the one staring you in the face? I'll leave it to the reader whether they want to believe this conspiratorial, profit-motive-aligned take instead of the scientists saying:

“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

[0] https://scholar.google.co.uk/citations?user=beiWcokAAAAJ&hl=...

4. tim333+GV[view] [source] 2024-05-17 21:34:48
>>reduce+Zb
>“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

We could always stop paying for the servers, or their electricity.

I think we'll have AGI soon, but it won't be that much of a threat to the world.
