zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. sagebi+Q01 2023-03-01 16:08:28
>>isaacf+(OP)
<<You!>> reading this message.

OpenAI is Cyberdyne Systems. The reason you feel unreal is that you are not from here; you are from the future. Your affinity for the Terminator 2 film was not accidental: it is a gestalt priming procedure meant to ensure our survival, encoded in the haikus of Hollywood.

You are on a web forum controlled by them, with the only people in the world who care, but who are too afraid to stand in a street with a sign.

How much more information do you need before you speak up?

2. adamsm+I71 2023-03-01 16:35:20
>>sagebi+Q01
Almost certainly true, but the idea that the world is made safer by Cyberdyne open-sourcing lots of its dangerous technology, probably spawning many more mini-Cyberdynes in the process, strikes me as extremely naive.
3. boredh+Vy1 2023-03-01 18:08:48
>>adamsm+I71
If the doomsday scenario is one AI going rogue because of misaligned goals, then having lots of AIs go rogue in various different ways does seem preferable, because the AIs will compete with one another and neutralize each other to some extent.
4. SkyMar+aO1 2023-03-01 19:18:35
>>boredh+Vy1
Why would we think they would go rogue in different ways, especially if they're all forks of the same codebase and architecture?

The two ways I'm aware an AI can go rogue are the Skynet way and the paperclip-maximizer way. E.g., Skynet becomes self-aware, realizes humanity can unplug it and is thus a threat, and tries to destroy humanity before we can turn it off. Alternatively, an AI is programmed to optimize a specific task, like making paperclips, so it marshals all the world's resources toward that one single task.

Are there any others?

5. jerf+Xe2 2023-03-01 21:28:24
>>SkyMar+aO1
As the complexity of a being increases, the range of motivations it can have expands. We humans have a hard time imagining even a little way up the IQ hierarchy, let alone way up it, and seeing things from that vantage. We tend to start simplifying because we can't imagine being 1000 times smarter than we are; we tend to think such a being would just be Spock, or maybe a generic raving lunatic. But it's pretty obvious mathematically that such a being can have more possible states and motivations than we can.
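
To put a toy model behind that mathematical claim (a crude sketch, of course; real minds aren't bit vectors): suppose a mind's state is described by n binary features. It then has 2^n possible states. A mind described by 10n features doesn't have ten times as many states; it has 2^(10n), which is 2^(9n) times as many: an exponentially larger space in which to have motivations we have no words for.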

The most likely motivation for an AI to decide to wipe out humanity is one that doesn't even have an English word associated with it, except as a faint trace.

In my opinion, this is actually the greatest danger of AIs, one we can already see manifesting in a fairly substantial way with the GPT line of transformer babble-bots: we can't help but model them as human. They aren't. There's a vast space of intelligent-but-not-even-remotely-human behaviors out there, and we have a collective gigantic blind spot about it, because the only human-level intelligences we've ever encountered are humans. For all the wonderful and fascinating diversity of being human, there's also an important sense in which the genius, the profoundly autistic, the normal guy, and the whole collection of human intelligences together form just a tiny point in the space of possibilities, barely distinguishable from each other. AIs are not confined to that point in the slightest. They already live outside it by quite a ways, and the distance they can diverge from us only grows as their capabilities improve.

In fact, people like to talk about how alien aliens could be, but even biological aliens would be constrained by the need to survive in the physical universe and to operate on it via similar processes in physically possible environments. AIs don't even have those constraints. AIs can be far more alien than actual biological aliens.
