zlacker

[return to "OpenAI departures: Why can’t former employees talk?"]
1. yashap+2J1[view] [source] 2024-05-18 15:22:57
>>fnbr+(OP)
For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.

There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.

2. root_a+yP1[view] [source] 2024-05-18 16:26:22
>>yashap+2J1
More than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die: if AGI is possible at all, it won't be achieved in the foreseeable future, and it certainly will not emerge from quadratic-time brute force over a fraction of the text and images scraped from the internet.
3. MrScru+eS1[view] [source] 2024-05-18 16:56:53
>>root_a+yP1
Clearly we don’t know when, or if, AGI will happen, but the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’. It probably won’t result from just scaling LLMs, but that’s why many researchers are trying to find the next significant advancement, in parallel with others trying to commercially exploit LLMs.
[go to top]