zlacker

[return to "OpenAI departures: Why can’t former employees talk?"]
1. thorum+Bu[view] [source] 2024-05-17 23:10:57
>>fnbr+(OP)
Extra respect is due to Jan Leike, then:

https://x.com/janleike/status/1791498174659715494

2. a_wild+Xv[view] [source] 2024-05-17 23:24:41
>>thorum+Bu
I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl-clutching pretext to justify digging moats. All this after sucking up everyone's copyrighted material as fair use, then not releasing the result, and profiting off it.

All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.

3. refulg+rw[view] [source] 2024-05-17 23:29:49
>>a_wild+Xv
Adding a disclaimer for people unaware of context (I feel the same as you):

OpenAI made a large commitment to superalignment in the not-so-distant past, I believe in mid-2023. Famously, it has always taken AI Safety™ very seriously.

Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research".

To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.

4. jasonf+by[view] [source] 2024-05-17 23:45:35
>>refulg+rw
> To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.

What's his track record on promises/predictions of this sort? I wasn't paying attention until pretty recently.

5. NomDeP+jA[view] [source] 2024-05-18 00:07:12
>>jasonf+by
As a child I used to watch a TV programme called Tomorrow's World. On it they predicted these very same things in similar timeframes.

That programme aired in the 1980s. Other than promises from people with a vested interest, is there much to indicate it's close at all? Empty promises aside, I don't see any real indication that it's likely.

6. Davidz+mF[view] [source] 2024-05-18 01:03:48
>>NomDeP+jA
are we living in the same world?????

7. NomDeP+581[view] [source] 2024-05-18 08:51:04
>>Davidz+mF
I would assume so. I've spent some time looking into AI for software development and general use, and I'm both slightly impressed and, at the same time, don't really get the hype.

At present it's better and quicker search for the area I specialise in.

It's not currently even close to being a 2x multiplier for me; it's possibly even a slight negative, though probably not, and I'm still exploring. That feels detached from the promises. Interesting, but at present more hype than hyper. Also, it's energy-inefficient and therefore cost-heavy. I feel that will likely cripple a lot of use cases.

What's your take?
