zlacker

OpenAI departures: Why can’t former employees talk?
1. yashap+2J1 2024-05-18 15:22:57
>>fnbr+(OP)
For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.

There’s a very real risk that AGI either literally destroys the human race or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company, with a charter that firmly put the needs of humanity over its own pocketbook and a heavy focus on the alignment problem. Instead they’ve closed up and become your standard company looking to make themselves ultra wealthy, and an extra vicious, “win at any cost” one at that. This, plus their AI alignment people leaving in droves (and being muzzled on the way out), should be scary to pretty much everyone.

2. root_a+yP1 2024-05-18 16:26:22
>>yashap+2J1
Even more than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die: if AGI is possible at all, it won’t be achieved any time in the foreseeable future, and it certainly won’t emerge from quadratic-time brute force over a fraction of the text and images scraped from the internet.
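
(To put a number on “quadratic”: self-attention compares every token with every other token, so doubling the context length roughly quadruples the work. A toy back-of-the-envelope sketch in Python; the attention_flops helper and the d_model=4096 figure are illustrative assumptions, not any real model’s numbers:)

    def attention_flops(seq_len, d_model):
        # Q @ K^T alone is a (seq_len x d_model) by (d_model x seq_len)
        # matmul: roughly 2 * seq_len^2 * d_model multiply-adds per layer.
        return 2 * seq_len * seq_len * d_model

    for n in (1_000, 2_000, 4_000):
        print(n, f"{attention_flops(n, d_model=4096):.1e}")
    # Each doubling of seq_len roughly quadruples the attention cost.
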
3. dclowd+822 2024-05-18 18:24:58
>>root_a+yP1
Ah yes, the “our brains are somehow inherently special” coalition. Hand-waving away the capabilities of LLMs as dumb math while not having a single clue about the math that underlies our own brains’ functionality.

I don’t know if you’re conflating capability with consciousness, but frankly it doesn’t matter whether the thing knows it’s alive if it still makes everyone obsolete.

4. root_a+Cl2 2024-05-18 21:12:16
>>dclowd+822
This isn't a question of understanding the brain. We don't even have a theory of AGI; the idea that LLMs are anywhere near approaching an existential threat to humanity is science fiction.

LLMs are a super impressive advancement, like calculators for text, but if you want to force the discussion into a grandiose context then they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but at the end of the day their statistical nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding, or just unlucky RNG. The only threat LLMs present is the risk that people will introduce their outputs into safety-critical systems.
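
(To make “unlucky RNG” concrete: generation typically samples from a probability distribution over candidate next tokens, so the same input can yield different outputs from run to run. A toy sketch; the sample_token helper, logits, and temperature are made-up values, not from any actual model:)

    import numpy as np

    rng = np.random.default_rng()

    def sample_token(logits, temperature=1.0):
        # Softmax over temperature-scaled logits, then draw one token index.
        # Higher temperature flattens the distribution: more randomness.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [3.0, 2.5, 0.5]  # toy scores for three candidate tokens
    print([sample_token(logits, temperature=1.2) for _ in range(10)])
    # Repeated runs give different outputs from identical input.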
