zlacker

Leaked OpenAI documents reveal aggressive tactics toward former employees

submitted by apengw+(OP) on 2024-05-22 22:22:30 | 1791 points 519 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
◧◩
12. madeof+l4[view] [source] [discussion] 2024-05-22 22:46:58
>>tedivm+W2
For whatever it's worth (not much), Sam Altman did say they would do that

> if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.

https://x.com/sama/status/1791936857594581428

◧◩
64. mturmo+rb[view] [source] [discussion] 2024-05-22 23:24:52
>>tsunam+I6
Since the beginning:

https://en.wikipedia.org/wiki/Traitorous_eight#Frictions

◧◩◪◨
69. poszle+Ob[view] [source] [discussion] 2024-05-22 23:26:36
>>ra7+I9
I think you are correct. It's a status symbol at this point. It's the digital version of "inconspicuous consumption" [1].

[1] https://www.theatlantic.com/magazine/archive/2008/07/inconsp...

◧◩◪◨
70. thfura+Xb[view] [source] [discussion] 2024-05-22 23:27:23
>>squigz+Ja
The legalese to handle that is one of the standard boilerplate clauses: https://en.m.wikipedia.org/wiki/Severability
75. tomcam+oc[view] [source] 2024-05-22 23:30:03
>>apengw+(OP)
From Sam Altman:

> this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

Bullshit. Presumably Sam Altman has 20 IQ points on me. He obviously knows better. I was a CEO for 25 years and no contract was issued without my knowing every element in it. In fact, I had them all written by lawyers in plain English, resorting to all caps and legal boilerplate only when it was deemed necessary.

For every house, business, or other major asset I sold, if there were one or more legal documents associated with the transaction, I read them all, every time. When I go to the doctor and they have a privacy or HIPAA form, I read those too. Everything the kids' schools sent to me for signing--read those as well.

He lies. And if he doesn't... then he is being libeled right and left by his sister.

https://twitter.com/anniealtman108

91. istjoh+Od[view] [source] 2024-05-22 23:37:15
>>apengw+(OP)
I wonder if this HN post will get torpedoed as fast as the one from yesterday[0].

0. >>40435440

◧◩◪◨
93. Terr_+0e[view] [source] [discussion] 2024-05-22 23:38:31
>>tedivm+u6
Hold up... Do you really think that a C-suite, including career venture capitalists who happen to be leading (and owning stock in) a private startup that has hit an estimated billion-plus valuation, is too naive/distracted to be involved in how that stock is used to retain employees?

In other words, I'm pretty sure the Ed Dillingers are already in charge, not Walter Gibbs garage-tinkerers. [0]

[0] https://www.youtube.com/watch?v=atmQjQjoZCQ

95. adt+6e[view] [source] 2024-05-22 23:39:01
>>apengw+(OP)
PDF:

https://s3.documentcloud.org/documents/24679729/aestas_reduc...

◧◩
100. reaper+Qe[view] [source] [discussion] 2024-05-22 23:43:32
>>tsunam+I6
Seems to pre-date semi-intelligent AI agents: https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...
◧◩
112. jay-ba+Fg[view] [source] [discussion] 2024-05-22 23:54:32
>>tomcam+oc
> He lies. And if he doesn't... then he is being libeled right and left by his sister.

>

> https://twitter.com/anniealtman108

You know, it's always heartbreaking to me seeing family issues spill out in public, especially on the internet. If the things Sam's sister says about him are all true, then he's, at the very minimum, an awful brother; but honestly, a lot of it comes across as the words of a bitter or jealous sibling… really sad, though.

◧◩
186. rachof+as[view] [source] [discussion] 2024-05-23 01:10:11
>>JCM9+pg
It's the correct counter-strategy to people who believe that you shouldn't attribute to malice what could be attributed to stupidity (and who don't update that prior for their history with a particular actor).

And it works in part because things often are accidents - enough to give plausible deniability and room to interpret things favorably if you want to. I've seen this from the inside. Here are two HN threads about times my previous company was exposing (or was planning to expose) data users didn't want us to: [1] [2]

Without reading our responses in the comments, can you tell which one was deliberate and which one wasn't? It's not easy to tell with the information you have available from the outside. The comments and eventual resolutions might tell you, but the initial apparent act won't. (For the record, [1] was deliberate and [2] was not.)

[1] >>23279837

[2] >>31769601

◧◩◪◨⬒
204. speff+kt[view] [source] [discussion] 2024-05-23 01:21:39
>>lobste+5d
15y account here too - also quit. Tried lemmy for a while and didn't like it. At least it helped me kick the reddit habit. Don't even go there anymore

https://old.reddit.com/u/speff

◧◩◪◨
225. 837204+Ew[view] [source] [discussion] 2024-05-23 01:49:00
>>suroot+Mn
This or something else?

>>9593177

◧◩◪◨⬒
229. HaZeus+ax[view] [source] [discussion] 2024-05-23 01:53:09
>>lobste+Kd
Gives off "if I sound pleased about this, it's because my programmers made this my default tone of voice! I'm actually quite depressed! :D" [1] vibes

1 - https://www.youtube.com/watch?v=oGnwMre07vQ

◧◩◪
248. abrich+wA[view] [source] [discussion] 2024-05-23 02:23:48
>>rachof+as
> you shouldn't attribute to malice what could be attributed to stupidity

It's worth noting that Hanlon’s razor was not originally intended to be interpreted as a philosophical aphorism in the same way as Occam’s:

> The term ‘Hanlon’s Razor’ and its accompanying phrase originally came from an individual named Robert J. Hanlon from Scranton, Pennsylvania, as a submission for a book of jokes and aphorisms, published in 1980 by Arthur Bloch.

https://thedecisionlab.com/reference-guide/philosophy/hanlon...

Hopefully we can collectively begin to put this notion to rest.

◧◩
249. chilli+VA[view] [source] [discussion] 2024-05-23 02:26:23
>>mateus+x1
OpenAI is deeply sorry.

https://www.youtube.com/watch?v=15HTd4Um1m4

◧◩
272. treme+hD[view] [source] [discussion] 2024-05-23 02:49:12
>>istjoh+Od
PG is Altman's godfather more or less. I am disappoint of these OpenAI news as of late.

5. Sam Altman

I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.

Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

https://paulgraham.com/5founders.html

◧◩◪◨⬒⬓
273. lotsof+vD[view] [source] [discussion] 2024-05-23 02:50:33
>>whitej+2u
The federal government banned non-competes last month:

https://www.ftc.gov/news-events/news/press-releases/2024/04/...

◧◩
341. whymau+2P[view] [source] [discussion] 2024-05-23 04:51:07
>>manlob+CK
Here's an explanation:

https://www.levels.fyi/blog/openai-compensation.html

◧◩◪◨
360. heavys+sT[view] [source] [discussion] 2024-05-23 05:41:20
>>Always+9N
https://en.wikipedia.org/wiki/Effective_accelerationism
◧◩
368. dang+kV[view] [source] [discussion] 2024-05-23 05:59:18
>>istjoh+Od
I didn't see that comment but I did post >>40437018 elsewhere in that thread, which addresses the same concerns. If anyone reads that and still has a concern, I'd be happy to take a crack at answering further.

The short version is that users flagged that one plus it set off the flamewar detector, and we didn't turn the penalties off because the post didn't contain significant new information (SNI), which is the test we apply (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). Current post does contain SNI so it's still high on HN's front page.

Why do we do it this way? Not to protect any organization (including YC itself, and certainly including OpenAI or any other BigCo), but simply to avoid repetition. Repetition is the opposite of intellectual curiosity (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...), which is what we're hoping to optimize for (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...).

I hesitate to say "it's as simple as that" because HN is a complicated beast and there are always other factors, but...it's kind of as simple as that.

371. kashya+DV[view] [source] 2024-05-23 06:02:15
>>apengw+(OP)
Great, if these documents are credible, this is exactly what I was implying[1] yesterday. Here, listen to Altman say how he is "genuinely embarrassed":

"this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have."

The first thing the above conjures up is the other disgraced Sam (Bankman-Fried) saying "this is on me" when FTX went bust. I bet euros-to-croissants I'm not the only one to notice this.

Some amount of corporate ruthlessness is part of the game, whether we like it or not. But these SV robber barons really crank it up to something else.

[1] >>40425735

◧◩◪◨
375. dang+ZV[view] [source] [discussion] 2024-05-23 06:07:40
>>loceng+Kk
It's a common suggestion but I don't think it would work and have posted quite a few times about why: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
◧◩◪
385. dang+DX[view] [source] [discussion] 2024-05-23 06:21:35
>>nwoli+wn
It's standard moderation on HN to downweight subthreads where the root comment is snarky, unsubstantive, or predictable. Most especially when it is unsubstantive + indignant. This is the most important thing we've figured out about improving thread quality in the last 10 years.

But it doesn't vary based on specific persons (not Sam or anyone else). Substantive criticism is fine, but predictable one-liners and that sort of thing are not what we want here—especially since they evoke even worse from others.

The idea of HN is to have an internet forum—to the extent possible—where discussion remains intellectually interesting. The kind of comments we're talking about tend to choke all of that out, so downweighting them is very much in HN's critical path.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

◧◩◪◨
404. ayewo+b41[view] [source] [discussion] 2024-05-23 07:18:27
>>suroot+Mn
Not sure what you are trying to get across.

This is the final comment [1] that got Michael’s account banned.

You can see dang’s reply [2] directly underneath his which says:

> We've banned this account.

1: >>10017538

2: >>10019003

◧◩◪
408. kashya+a61[view] [source] [discussion] 2024-05-23 07:35:51
>>aswegs+a41
No, I don't. That's why I put it in "scare quotes". You wouldn't get that impression had you read the comment I linked above :) — >>40425735

I was trying to be a bit restrained in my criticism; otherwise, it gets too repetitive.

411. redbel+0a1[view] [source] 2024-05-23 08:04:43
>>apengw+(OP)
The number [and scale] of questionable practices, chaos, and controversies caused by OpenAI since ChatGPT was released is "on par" with the powerful products it has built since... in a negative way!

These are the hottest controversies so far, in chronological order:

  OpenAI's deviation from its original mission (https://news.ycombinator.com/item?id=34979981).
  The Altman's Saga (https://news.ycombinator.com/item?id=38309611).
  The return of Altman (within a week) (https://news.ycombinator.com/item?id=38375239).
  Musk vs. OpenAI (https://news.ycombinator.com/item?id=39559966).
  The departure of high-profile employees (Karpathy: https://news.ycombinator.com/item?id=39365935, Sutskever: https://news.ycombinator.com/item?id=40361128).
  "Why can’t former OpenAI employees talk?" (https://news.ycombinator.com/item?id=40393121).
◧◩◪
437. r721+xt1[view] [source] [discussion] 2024-05-23 10:56:50
>>user_7+ei
This post and discussion from 2013 might interest you: >>6799854
◧◩◪◨⬒⬓
443. tomp+qA1[view] [source] [discussion] 2024-05-23 11:51:22
>>whitej+2u
No, you're confusing stuff.

First of all, taking any code with you is theft, and you go to jail, like this poor Goldman Sachs programmer [1]. This will happen even if the code has no alpha.

However, no one can prevent you from taking knowledge (i.e. your memories), so reimplementing alpha elsewhere is fine. Of course, the best alpha is that which cannot simply be replicated, e.g. it depends on proprietary datasets, proprietary hardware (e.g. fast links between exchanges), access to cheap capital, etc.

What hedge funds used to do is give you lengthy non-competes: 6 months for junior staff, 1-2 years for traders, 3+ years in the case of Renaissance Technologies.

In the US, that's now illegal and unenforceable. So what hedge funds do now is lengthy garden(ing) leaves. This means you still work for the company, you still earn a salary, and in some (many? all?) cases also the bonus. But you don't go to the office, you can't access any code, and you don't see any trades. The company "moves on" (develops/refines its alpha, including your alpha - alpha you created) and you don't.

These lengthy garden leaves replaced non-competes, so they're now 1 year+. AFAIK they are enforceable, just as non-competes during employment always have been.

[1] https://nypost.com/2018/10/23/ex-goldman-programmer-sentence...

◧◩
467. gen220+712[view] [source] [discussion] 2024-05-23 14:29:49
>>kashya+DV
Patrick Collison interviewed Sam Altman in May 2023 [1]

In the intro, Patrick goes off-script to make a joke about how last year he'd interviewed SBF, which was "clearly the wrong Sam".

I'm eagerly waiting for 2025, when he interviews some new Sam and is able to recycle the joke. :)

[1]: https://www.youtube.com/watch?v=1egAKCKPKCk

◧◩
470. Wurdan+Y32[view] [source] [discussion] 2024-05-23 14:43:41
>>redbel+0a1
I'd say you can add the time Altman threatened to pull OpenAI out of the EU if its regulation wasn't to his liking https://www.reuters.com/technology/openai-may-leave-eu-if-re...
◧◩
475. notshi+ma2[view] [source] [discussion] 2024-05-23 15:16:24
>>notshi+L01
https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...
◧◩
493. tim333+oa3[view] [source] [discussion] 2024-05-23 20:36:33
>>redbel+0a1
And of course Scarlett, which is beating Musk in votes and comments (>>40421225)
◧◩◪◨⬒
518. dang+Ukf[view] [source] [discussion] 2024-05-28 22:43:25
>>CRConr+qEd
> seemingly always just so happen to have the outcome they have.

The key word there is "seemingly". You notice what you're biased to notice and generalize based on that. People with different views notice different things and make different generalizations.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

◧◩
519. CRConr+ehl[view] [source] [discussion] 2024-05-30 22:03:34
>>redbel+0a1
Parent comment's links, but (hopefully) clickable.

OpenAI's deviation from its original mission - >>34979981

The Altman's Saga - >>38309611

The return of Altman (within a week) - >>38375239

Musk vs. OpenAI - >>39559966

The departure of high-profile employees -

Karpathy: >>39365935

Sutskever: >>40361128

"Why can’t former OpenAI employees talk?" - >>40393121

[go to top]