zlacker

[parent] [thread] 8 comments
1. thepas+(OP)[view] [source] 2023-11-18 03:26:36
This is genuinely frustrating.

IF the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.

I don't want AI safety. The people talking about this stuff like it's a terminator movie are nuts.

Strongly believe that this will be a lot like facebook/oculus ousting Palmer Lucky due to his "dangerous" completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (anduril), which has a much more powerful and direct ability to enact his political will.

SamA isn't going to leave oAI and like...retire. He's the golden boy of golden boys right now. Every company with an interest in AI is I'm sure currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.

replies(7): >>oivey+s >>EMIREL+w >>padols+73 >>huyter+15 >>crotch+d6 >>kcb+km >>DonHop+Up
2. oivey+s[view] [source] 2023-11-18 03:29:43
>>thepas+(OP)
What is Sam Altman going to do with a GPU? Find some engineers to use them I guess?
3. EMIREL+w[view] [source] 2023-11-18 03:30:34
>>thepas+(OP)
From what I understand, the board doesn't want "AI safety" to be the core or even a major driving force. The whole contention sprung about because of sama's way of running the company ("ClosedAI", for-profit) at odds with the non-profit charter and overall spirit of the board and many people working there.
4. padols+73[view] [source] 2023-11-18 03:50:34
>>thepas+(OP)
Yeh I really wish they'd better articulate the "AI safety" directive in a way that is broader than deepfakes and nuclear/chemical winter. It feels like an easy sell to regulators. Meanwhile most of us are creating charming little apps that make day-to-day lives easier and info more accessible. The hand-wavey moral panic is a bit of a tired trope in tech.

Also.. eventually anyone will be able to run a small bank of GPUs and train models equally capable to GPT-4 in a matter of days so it's all kinda.. moot and hilarious. Everyone's chatting about AGI alignment, but that's not something we can lockdown early or sufficiently. Then embedded industry folks are talking about constitutional AI as if it's some major alignment salve. But if they were honest they'd admit it's really just a SYSTEM prompt front-loaded with a bunch of axioms and "please be a good boy" rules, and is thus liable to endless injections and manipulations by means of mere persuasion.

The real threshold of 'danger' will be when someone puts an AGI 'instance' in a fully autonomous hardware that can interact with all manner of physical and digital spaces. ChatGPT isn't going to randomly 'break out'. I feel so let down by these kinds of technically illfounded scare tactics from the likes of Altman.

replies(1): >>Walter+75
5. huyter+15[view] [source] 2023-11-18 04:06:06
>>thepas+(OP)
Palmer had some ridiculous perspectives. Don’t put that pos in the same bucket as Sam.
◧◩
6. Walter+75[view] [source] [discussion] 2023-11-18 04:06:50
>>padols+73
AI will decide our fate in a microsecond.
7. crotch+d6[view] [source] 2023-11-18 04:15:07
>>thepas+(OP)
> Palmer, of course, went on to start a company (anduril), which has a much more powerful and direct ability to enact his political will.

If that were true, Palmer Lucky wouldn't spend all his time ranting on twitter about how he was so easily hoodwinked by the community of a particular linux distribution / functional programming language.

8. kcb+km[view] [source] 2023-11-18 06:14:38
>>thepas+(OP)
Convincing yourself and others that you're developing this thing that could destroy humanity if you personally slip up and are just not careful enough makes you feel really powerful.
9. DonHop+Up[view] [source] 2023-11-18 06:52:01
>>thepas+(OP)
Just because bigotry and misogyny and racism are views shared by half the country doesn't make them right.
[go to top]