zlacker

[return to "OpenAI departures: Why can’t former employees talk?"]
1. yashap+2J1[view] [source] 2024-05-18 15:22:57
>>fnbr+(OP)
For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.

There’s a very real, significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company, with a charter that would firmly put the needs of humanity over their own pocketbooks and a heavy focus on the alignment problem. Instead they’ve closed up and become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This, plus their AI alignment people leaving in droves (and being muzzled on the way out), should be scary to pretty much everyone.

◧◩
2. root_a+yP1[view] [source] 2024-05-18 16:26:22
>>yashap+2J1
More than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die: if AGI is possible at all, it won't be achieved any time in the foreseeable future, and it certainly won't emerge from quadratic-time brute force over a fraction of the text and images scraped from the internet.
◧◩◪
3. MrScru+eS1[view] [source] 2024-05-18 16:56:53
>>root_a+yP1
Clearly we don’t know when, or if, AGI will happen, but the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’. It probably won’t result from just scaling LLMs, but that’s why a lot of researchers are trying to find the next significant advance, in parallel with others trying to commercially exploit LLMs.
◧◩◪◨
4. timr+xV1[view] [source] 2024-05-18 17:34:40
>>MrScru+eS1
The same way that the expectation of many people working in the self-driving field in 2016 was that Level 5 autonomy was right around the corner.

Take this stuff with a HUGE grain of salt. A lot of goofy, hyperbolic people work in AI (as at any startup, really).

◧◩◪◨⬒
5. huevos+S32[view] [source] 2024-05-18 18:42:04
>>timr+xV1
While I agree with your point, I take self-driving rides on a weekly basis, and you see them all over SF nowadays.

We overestimate short-term progress, but underestimate medium- and long-term progress.

◧◩◪◨⬒⬓
6. timr+L62[view] [source] 2024-05-18 19:07:38
>>huevos+S32
I don't think we disagree, but I will say that "a handful of people in SF and AZ taking rides in cars that are remotely monitored 24/7" is not the drivers-are-obsolete-now, near-term future being promised in 2016. Remember the panic because long-haul truckers were going to be unemployed Real Soon Now? I do.

Back then, I said that the future of self-driving would likely be the growth in capability of "driver assistance" features toward an asymptote that we will re-define as "level 5" in the distant future (or perhaps the "levels" will be memory-holed altogether, only to reappear in retrospective, "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.
