zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of HN comments going back years and into the public light and the mainstream.

They've achieved marvellous things, OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of holding the lead in this field.

◧◩
2. adamsm+r61[view] [source] 2023-03-01 16:30:49
>>mellos+pe
>This seems an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of HN comments going back years and into the public light and the mainstream.

The thing is, it's probably a good thing. The "Open" part of OpenAI was always an almost suicidally bad mistake. The point of starting the company was to try to reduce the p(doom) of creating an AGI, and it's almost certain that the more people who have access to powerful and potentially dangerous tech, the higher p(doom) gets. I think OpenAI is one of the most dangerous organizations on the planet, and being more closed reduces that danger slightly.

◧◩◪
3. fragsw+291[view] [source] 2023-03-01 16:39:34
>>adamsm+r61
I'm curious: what do you think makes them dangerous?
◧◩◪◨
4. adamsm+Dh1[view] [source] 2023-03-01 17:10:39
>>fragsw+291
They are releasing powerful AI tools at an alarming rate, before safety researchers, and I mean real safety researchers, have a chance to understand their implications. They are generating an enormous amount of buzz and hype, which is fueling a coming AI arms race that is extremely dangerous. The Control Problem is very real and becoming more pressing as things accelerate. Sam has recently paid lip service to caring about the problem, but OpenAI's actions suggest it's not a major priority. There was a hint that they cared when they thought GPT-2 was too dangerous to release publicly, but at this point, if they were serious about safety, no model beyond ChatGPT and Bing would be released to the public at all, full stop.

https://openai.com/blog/planning-for-agi-and-beyond/

Based on Sam's statement, they seem to be betting that accelerating AI progress now will help solve the Control Problem faster in the future. This strikes me as an extremely dangerous bet, because if they are wrong they are substantially reducing the time the rest of the world has to solve the problem, potentially narrowing that window enough that it won't be solved in time at all, and then foom.

◧◩◪◨⬒
5. Fillig+ix1[view] [source] 2023-03-01 18:03:23
>>adamsm+Dh1
> Based on Sam's statement, they seem to be betting that accelerating AI progress now will help solve the Control Problem faster in the future. This strikes me as an extremely dangerous bet, because if they are wrong they are substantially reducing the time the rest of the world has to solve the problem, potentially narrowing that window enough that it won't be solved in time at all, and then foom.

Moreover, now that they've started the arms race, they can't stop. There are too many other companies joining in, and I don't think it's plausible they'll all hold to a truce even if OpenAI wants one.

I assume you've read Scott Alexander's take on this? https://astralcodexten.substack.com/p/openais-planning-for-a...

[go to top]