zlacker

[return to "Ilya Sutskever "at the center" of Altman firing?"]
1. thepas+k6[view] [source] 2023-11-18 03:26:36
>>apsec1+(OP)
This is genuinely frustrating.

IF the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.

I don't want AI safety. The people talking about this stuff like it's a terminator movie are nuts.

Strongly believe that this will be a lot like facebook/oculus ousting Palmer Lucky due to his "dangerous" completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (anduril), which has a much more powerful and direct ability to enact his political will.

SamA isn't going to leave oAI and like...retire. He's the golden boy of golden boys right now. Every company with an interest in AI is I'm sure currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.

◧◩
2. padols+r9[view] [source] 2023-11-18 03:50:34
>>thepas+k6
Yeh I really wish they'd better articulate the "AI safety" directive in a way that is broader than deepfakes and nuclear/chemical winter. It feels like an easy sell to regulators. Meanwhile most of us are creating charming little apps that make day-to-day lives easier and info more accessible. The hand-wavey moral panic is a bit of a tired trope in tech.

Also.. eventually anyone will be able to run a small bank of GPUs and train models equally capable to GPT-4 in a matter of days so it's all kinda.. moot and hilarious. Everyone's chatting about AGI alignment, but that's not something we can lockdown early or sufficiently. Then embedded industry folks are talking about constitutional AI as if it's some major alignment salve. But if they were honest they'd admit it's really just a SYSTEM prompt front-loaded with a bunch of axioms and "please be a good boy" rules, and is thus liable to endless injections and manipulations by means of mere persuasion.

The real threshold of 'danger' will be when someone puts an AGI 'instance' in a fully autonomous hardware that can interact with all manner of physical and digital spaces. ChatGPT isn't going to randomly 'break out'. I feel so let down by these kinds of technically illfounded scare tactics from the likes of Altman.

◧◩◪
3. Walter+rb[view] [source] 2023-11-18 04:06:50
>>padols+r9
AI will decide our fate in a microsecond.
[go to top]