zlacker

1. iterat+ (OP) 2023-11-20 13:36:31
So we have OpenAI, Microsoft, a whole lot of capital, and a few "rock stars" on the move. And it's these people, holding the keys to the AI kingdom, who are now going to work to achieve AGI.

Finally they got rid of this pesky idea of "safety". We're back in "break things" mode.

Does nobody recognize the stakes here? AGI, which would soon accelerate into something far more capable, ends civilization. I'm not saying it would kill us; I'm saying it makes us cognitively obsolete, and all meaning is lost.

AI safety isn't about some micro-bias in the training set. It's existential, at planetary scale. Yet we let a bunch of cowboys just go "let's see what happens" with zero meaningful regulation in sight. And we applaud them.

I know AGI isn't here yet. I know Microsoft wouldn't allow zero safety. I'm just saying that on the road to AGI, about two dozen people are deciding our collective fate. With, as ultimate chief, the guy behind the shitcoin Worldcoin.

replies(1): >>bbu+h6
2. bbu+h6 2023-11-20 14:00:10
>>iterat+(OP)
If AGI is as close as autonomous cars, I think we are going to be OK.