imiric (OP) 2023-07-06 08:29:42
So essentially "we're going to build a new AI to oversee our other AIs". But then who oversees the overseer?

This is not a problem OpenAI can solve in isolation, so this reads more like a marketing piece. The grave danger AI poses to humanity is not a Skynet-style AI-gone-rogue scenario, but humans doing evil things to other humans using AI, which is inevitable. The genie is out of the bottle now, and it's only a matter of time before these systems are exploited.

I expect nothing major will happen in the next decade or two, besides an increase in mis/disinformation flooding our communication channels, with the negative effects we've already seen over the past decade. But once AI systems are deeply embedded in the machinery that makes modern society function (power, transport, finance, military, etc.), it only takes one rogue human actor to flip the switch that causes chaos. It will be like the nuclear threat, but on a much larger scale, with many more variables and humans involved. It's hard not to be pessimistic about such a scenario.

Sure, we'll have mechanisms in place that try to deter this from happening, but since we're unable to overcome our tribal nature, there's no doubt that AI will be weaponized as well.
