zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. jvande+P4[view] [source] 2024-05-17 15:45:53
>>robbie+(OP)
Honestly, having a "long-term AI risk" team is a great idea for an early-stage startup claiming to build general AI. It looks like they are taking the mission and the risks seriously.

But for a product-focused LLM shop trying to infuse its models into everything, it makes sense to tone down the hype.

2. nprate+J9[view] [source] 2024-05-17 16:14:37
>>jvande+P4
It makes it look like the tech is so rad it's dangerous. Total bollocks, but great marketing.
3. reduce+Zb[view] [source] 2024-05-17 16:27:35
>>nprate+J9
Ilya and Jan Leike[0] resigned (were fired) because they believed their jobs were a temporary marketing expense? Or maybe you think you understand the risks of AGI better than them, the creators of the frontier models?

Do you think that is a coherent worldview, compared to the other one staring you in the face? I'll leave it to the reader whether they want to believe this conspiratorial take, conveniently aligned with the profit motive, instead of the scientists saying:

“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

[0] https://scholar.google.co.uk/citations?user=beiWcokAAAAJ&hl=...

4. jvande+Kn[view] [source] 2024-05-17 17:30:28
>>reduce+Zb
They resigned (or were fired) because the business no longer needs their unit, which puts a damper on their impact and usefulness. It also makes them a cost center in a business that is striving to become profitable.

That is the simplest explanation; it's a tale as old as time. And it is fundamentally explained by a very plausible pivot from "world-changing general-purpose AI - believe me, it's real" to "world-changing LLM integration and innovation shop".

[go to top]