zlacker

1. DebtDe+(OP) 2023-11-20 14:52:20
Board will be ousted, but the ship has sailed on Sam and Greg coming back.
2. voittv+2D 2023-11-20 17:53:32
>>DebtDe+(OP)
I would think OpenAI is basically toast. They aren't coming back, these people will quit, and this will end up in court.

Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.

3. Applej+qJ 2023-11-20 18:15:22
>>voittv+2D
Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.

My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )

Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and on HIS belief system, to direct the company. His belief is that AGI is inevitable and must be pursued as an arms race, because whoever controls AGI will control/destroy the world. The AGI would do so by directing humans, or through access to the Internet, or some such technique. In seeking input from such an AI, he'd be pursuing the former approach, having it direct his decisions for mutual gain.

In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to the endpoint he considers inevitable.

The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.

Roko's Basilisk is a sci-fi hypothetical.

Altman's Basilisk, if that's what happened, is a panic reaction.

I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: it wouldn't take the whole system to run; I can run a 70B on my Mac Studio. It would take a bunch of resources, and an intent to engage in unauthorized training, to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.
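
(Back-of-envelope on the "runs on my Mac Studio" bit, as a sketch: the quantization levels and the overhead-free byte math are my assumptions, not anything stated above.)

    # Rough weight-memory math for a 70B-parameter model at common
    # quantization levels (weights only; KV cache and runtime
    # overhead would add more).
    params = 70e9
    for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
        weights_gb = params * bits / 8 / 1e9
        print(f"{label}: ~{weights_gb:.0f} GB for weights alone")
    # fp16: ~140 GB -> needs a 192 GB Mac Studio
    # int8:  ~70 GB -> fits in 96 GB+ unified memory
    # int4:  ~35 GB -> fits comfortably in 64 GB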

It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.

If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)

4. MVisse+tR 2023-11-20 18:43:17
>>voittv+2D
As long as compute keeps increasing, model size and performance can keep increasing.

So no, we’re nowhere near max capability.
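
(To put a number on that: a sketch using the fitted scaling law from Hoffmann et al. 2022, the "Chinchilla" paper. The constants, the C = 6*N*D approximation, and the ~20-tokens-per-parameter compute-optimal split are theirs; applying them here is my back-of-envelope, not anything about OpenAI's actual models.)

    # Predicted training loss as compute C grows, under the
    # Chinchilla fit L(N, D) = E + A/N^alpha + B/D^beta.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(flops):
        n = (flops / 120) ** 0.5  # params N, from C = 6*N*D, D = 20*N
        d = 20 * n                # training tokens D
        return E + A / n**alpha + B / d**beta

    for c in [1e23, 1e24, 1e25, 1e26]:
        print(f"C = {c:.0e} FLOPs -> predicted loss {loss(c):.3f}")
    # Loss keeps falling as compute grows, but only toward the
    # E = 1.69 floor, and with diminishing returns: "can keep
    # increasing" holds under this fit, just not without limit.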

5. moogly+OU 2023-11-20 18:54:36
>>voittv+2D
Everyone? Inevitable? Maybe on a time scale of a thousand years.