It's really easy to make people whole for this, so whether that happens or not is the difference between the apologies being real and just backpedaling because employees got upset.
Edit: Looks like they're doing the right thing here:
> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.
You have to be really attuned to "is this actually rational or sound, or am I adding in an implicit 'but we're good people, so...'"
Looks like they’re doing that.
Obviously that should not be possible any more with these leaked documents, given they prove both the existence of the scheme and Altman and other senior leadership knowing about it. Maybe they thought that since they'd already gagged the ex-employees, nobody would dare leak the evidence?
> if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.
> Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about.
> OpenAI's incorporation documents contain multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.
> Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI.
"...But there's a problem with those apologies from company leadership. Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about..."
About 5 months ago I had a chance to join a company. They had what looked like an extreme non-compete to me: you couldn't work for any company for two years after leaving if it had been a customer of theirs.
I pointed out that I wouldn't have been able to join their company if my previous job had had that non-compete clause; it seemed excessive. Eventually I was in meetings with a lawyer at the company, who told me it's probably not enforceable, don't worry about it, and the FTC is about to end non-competes anyway. I said great, strike it from the contract and I'll sign it right now. He said he couldn't do that: no one-off contracts. So then I said I'm not working there.
Let's say I find a profitable niche while working for a project and we decide to open a separate spin off startup to handle that idea. I'd expect legality to be handled for me, inherited from the parent company.
Now let's also say the company turns out to be disproportionately successful. I'd have a lot on my plate to worry about, the least of which would be the legal baggage the company inherited.
In this scenario it's probable that hostile clauses in contracts would be dug up. I would surely be legally responsible for them, but how much would I truly be to blame for them?
And if the company handles the incident well, how much should that blame matter?
And the employees also have way more leverage than Reddit users; at this point they should still be OpenAI's greatest asset. Even once this is fixed (which they obviously will do, given they got caught), it's still going to cause a major loss of trust in the entire leadership.
It accelerated rapidly with some trends like the Tea Party, Gamergate, Brexit, Andrew Wakefield, covid antivax, and the Ukraine situation, and is in evidence on both sides of the trans rights debate, in doxxing, in almost every single argument on X that goes past ten tweets, etc.
It's something many on the left have generally identified as coming more from the right wing or alt.right.
But that's just because it's easier to categorise when it's pointed at you. It's actually the primary toxicity of all argument in the 21st century.
And the reason is that weaponised bad faith is addictive fun for the operator.
Basically everyone gets to be Lee Atwater or Roger Stone for a bit, and everyone loves it.
I recently received a job recruitment email for an AI role in all-lowercase, and I was baffled as to how to interpret it.
Where is all this hatred coming from?
It looks like it was written in a sloppy way and nobody actually proofread it.
I think Sergey Brin used to do the same thing (or maybe it was Larry Page). I remember reading that in some google court case emails and thinking, the show Silicon Valley wasn't even remotely exaggerating.
im busy running a billion dollar company i dont have time for this
He can’t be trusted, and as a result OpenAI cannot be trusted.
That's like P.Diddy saying I'm sorry.
That's damage control for being caught doing something bad ... again.
I know extremely desirable researchers who refuse to work for Elon because of how he has historically treated employees. Repeated issues like this will slowly add OpenAI to that list for more people.
Add a grade in red at the top if you're feeling extra cheeky
It reads like omertà.
I wonder if I'll still get downvoted for saying this. A lot can change in 24 hours.
Edit: haha :-P
Just as Reddit users stay on Reddit because there is nowhere else to go, the reality is that everyone worships leadership because they keep their paychecks flowing.
I suppose there's probably a bunch of legalese to prevent that though...
I've seen equity clawbacks in employment agreements. Specifically, some of the contracts I've signed said that if I were fired for cause (and they were a bit more specific, like financial fraud or something) I'd lose my vested equity. That isn't uncommon, but it's not typically used to silence people, and it's part of the agreement people review and approve before becoming an employee. It's not a surprise they learn about only as they try to leave.
The relevant stakeholders here are the potential future employees, who are seeing in public exactly how OpenAI treats its employees.
Never seen anything that says money or equity you've already earned could be clawed back.
But no. The MBAs saw dollar signs, and everything went out the window. They fumbled the early mover advantage, and will be hollowed out by the competition and commodified by the PaaS giants. What a shame.
[1] https://www.theatlantic.com/magazine/archive/2008/07/inconsp...
Going to all-lowercase is harder on the reader, and thus is disrespectful of the reader. I will die on this hill.
Never negotiated on exit.
> this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.
Bullshit. Presumably Sam Altman has 20 IQ points on me. He obviously knows better. I was a CEO for 25 years and no contract was issued without my knowing every element in it. In fact, I had them all written by lawyers in plain English, resorting to all caps and legal boilerplate only when it was deemed necessary.
For every house, business, or other major asset I sold, if there were one or more legal documents associated with the transaction, I read them all, every time. When I go to the doctor and they have a privacy or HIPAA form, I read those too. Everything the kids' schools sent to me for signing--read those as well.
He lies. And if he doesn't... then he is being libeled right and left by his sister.
They wish. Napster is a more apt analogy.
Either way, I can imagine a subtext of "step forward and get a target on your back."
"...and that our PR firm wasn't good enough to squash the story."
They will follow the standard corporate 'disaster recovery' - say something to make it look like they're addressing it, then do nothing and just wait for it to fall out of the news cycle.
Changes like that are hard to measure.
I didn’t post about not engaging with or using the platform anymore. Nor did I delete my account, since it still holds some value to me. But I slinked away into the darkness and now HN is my social media tool.
And a bunch of not-well-informed employees who didn't understand the consequences of this clause when they originally signed it.
I've read your posts for years on HN, don't undersell yourself.
Many CEOs don't know what is in their company's contracts, nor do they think about it. While it is laudable that you paid such close attention, the fact is I've met many leaders who have no clue what's in their company's employment paperwork.
I HOPE YOU ARE HAVING A NICE DAY.
0. >>40435440
That sounds like a really bad idea for many many reasons. Lawyers are cheap compared to losing control, or even your stake, to legal shenanigans.
In other words, I'm pretty sure the Ed Dillingers are already in charge, not Walter Gibbs garage-tinkerers. [0]
I'm not surprised that they're rapidly backpedaling.
He’s not exactly new to this whole startup thing, and getting equity right is not a small part of it.
No doubt, OpenAI is as vacuous as their product is effective. GIGO.
It depends a bit on what you mean by left and right, but take something like Marxism: it was always 100% a propaganda effort created by people who owned newspapers, and the pervasiveness of propaganda has been a through line, e.g. in the Soviet Union, agitprop, etc. A big part of Marxist theory is that there is no objective reality, that social experience completely determines everything, and that sort of ideology naturally lends itself to the belief that blankets of bad-faith arguments for "good causes" are a positive good.
This sort of thinking was unpopular on the left for many years, but it's become more hip no doubt thanks to countries like Russia and China trying to re-popularize communism in the West.
(I typed this from my phone)
I think perhaps I didn't really make it totally clear that what I'm mostly talking about is a bit closer to the personal level -- the way people fight their corners, the way twitter level debate works, the way local politicians behave. The individual, ghastly shamelessness of it, more than the organised wall of lies.
Everyone getting to play Roger Stone.
Not so much broadcast bad faith as narrowcast.
I get the impression Stalinism was more like this -- you know, you have your petty level of power and you _lie_ to your superiors to maintain it, but you use weaponised bad faith on those you have power over.
It's a kind of emotional cruelty, to lie to people in ways they know are lies, that make them do things they know are wrong, and to make it obvious you don't care. And we see this everywhere now.
It’s becoming too much to just be honest oversights.
> https://twitter.com/anniealtman108
You know, it’s always heartbreaking to me seeing family issues spill out in public, especially on the internet. If the things Sam’s sister says about him are all true, then he’s, at the very minimum, an awful brother, but honestly, a lot of it comes across as a bitter or jealous sibling…really sad though.
If you are ever going to sign an employee agreement that binds you, consult with an employment attorney first. I did this with a past noncompete and it was the best few hundred I ever spent: my attorney talked with me for an hour about the particulars of my noncompete, pointed out areas to negotiate, and sent back redlines to make the contract more equitable.
However such signalling is harder to pull off than it seems, and most who try do it poorly because they don’t realise that the casual aesthetic isn't just a lack of care. Steve Jobs famously eschewed the suit for jeans and mock turtleneck. But those weren’t really casual clothes, those mock turtlenecks were bespoke, tailored garments made by a revered Japanese fashion designer. That is a world apart from throwing on whatever brand of T-shirt happens to feel comfortable to the wearer.
I guess these agreements mean the shares aren't the full, unrestricted property of the employee... and therefore income tax isn't payable when they vest.
The tax isn't avoided - it would just be paid when you sell the shares instead - which for most people would be a worse deal, because you'll probably sell them at a higher price than the vest price.
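A toy comparison of the two treatments, with invented prices and flat assumed rates (actual rates and rules vary by jurisdiction and grant type; this is just to show why deferring the whole amount to sale can cost more):

```python
# Hypothetical illustration: tax at vest + capital gains at sale,
# versus everything treated as ordinary income at sale.
# All numbers and rates are made up for the example, not tax advice.

ORDINARY_RATE = 0.37   # assumed ordinary income tax rate
CAPGAINS_RATE = 0.20   # assumed long-term capital gains rate

vest_price = 10.0      # share value when it vests
sale_price = 50.0      # share value when it is sold
shares = 1000

# Case 1: taxed at vest as ordinary income; later gains taxed as capital gains.
tax_at_vest = shares * vest_price * ORDINARY_RATE
tax_on_gain = shares * (sale_price - vest_price) * CAPGAINS_RATE
case1 = tax_at_vest + tax_on_gain

# Case 2: nothing taxed at vest; the whole sale proceeds are ordinary income.
case2 = shares * sale_price * ORDINARY_RATE

print(f"taxed at vest + cap gains at sale: ${case1:,.0f}")  # $11,700
print(f"all ordinary income at sale:       ${case2:,.0f}")  # $18,500
```

Under these assumed rates, deferral costs more precisely because the price rose between vest and sale, which is the point the comment is making.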
I think this clause is so non-standard for tech that it almost certainly got flagged or was explicitly discussed before being added, so claiming he didn't know it was there strains credulity badly.
This is dark.
It's not that I don't trust the mods, exactly; it's just that showing such numbers (if they exist) would be helpful for transparency.
You see the same pattern with social media accounts who claim to be on the Maxist-influenced left. Their tactics are very frequently emotionally abusive or manipulative. It's basically indistinguishable in style from how people on the fringe right behave.
Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.
Why wouldn’t they? I’m sure you can think of a couple of politicians and CEOs who in recent years have clearly demonstrated that no matter what they do or say, they will have a strong core of rabid fans eating their every word and defending them.
Pretty asinine response, but I work in Hollywood, and each studio lot has public tours giving anyone who wants one a glimpse behind the curtain. On my shows, we’ve even allowed those people to get off the studio golf cart to peek inside at our active set, even answering questions they have about what they see, which sometimes explains Hollywood trickery.
I’m sure there’s tons of young programmers that would love to see and understand how such a long-lasting great community like this one persists.
The innovation in detecting patterns would be incredible, and I think it would be best to evolve toward user-decided ranking algorithms that people personally subscribe to.
Changes in sentiment can be hard to measure, but changes in posting behavior seems incredibly easy to measure.
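As a rough sketch of how easy that measurement is (with invented sample data standing in for a real post log; a real pipeline would read from the site's database):

```python
# Hypothetical sketch: count each user's posts before and after an event date,
# e.g. a controversial policy change, to see whose activity dropped off.
from datetime import date

# (user, post_date) pairs -- made-up data for illustration.
posts = [
    ("alice", date(2023, 6, 1)), ("alice", date(2023, 6, 5)),
    ("alice", date(2023, 7, 2)),
    ("bob",   date(2023, 6, 10)), ("bob", date(2023, 6, 11)),
    ("bob",   date(2023, 6, 12)),
]
event = date(2023, 6, 30)  # the hypothetical policy-change date

def rate_change(posts, event):
    """Return {user: (posts_before_event, posts_after_event)}."""
    counts = {}
    for user, when in posts:
        before, after = counts.get(user, (0, 0))
        if when < event:
            counts[user] = (before + 1, after)
        else:
            counts[user] = (before, after + 1)
    return counts

print(rate_change(posts, event))
# {'alice': (2, 1), 'bob': (3, 0)}
```

A before/after count per user is crude, but it is exactly the kind of signal a site operator can pull in one query, which is the contrast with sentiment the comment is drawing.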
What's worse is that there's a ready line of journalists talking about how capital letters promote inequality or shit like that providing coverfire for them.
Oh I agree. I wasn't making it a right-vs-left thing, but rather neutering the idea that people perceive it to be.
I would not place myself on the political right at all -- even in the UK -- but I see this idea that bad-faith is an alt.right thing and I'm inclined to push back, because it's an oversimplification.
Not clear what you mean.
Do you mean it is generic to do that in contracts? (Been a while since I was offered equity.)
Or do you mean that even OpenAI would not try it without having set it up in the original contract? Because I hate to be the guy with the square brackets ;-)
Or would that have been an "if you break the law" thing?
Seems unlikely that OpenAI are legally in the clear here with nice clear precedent. Why? Because they are backflipping to deny it's something they'd ever do.
Even if in this specific instance he means well, it's still quite entertaining to interpret his statements this way:
"we have never clawed back anyone's vested equity"
=> But we can and will, if we decide to.
"nor will we do that if people do not sign a separation agreement"
=> But we made everyone sign the separation agreement.
"vested equity is vested equity, full stop."
=> Our employees don't have vested equity, they have something else we tricked them into.
"there was a provision about potential equity cancellation in our previous exit docs;"
=> And also in our current docs.
"although we never clawed anything back"
=> Not yet, anyway.
"the team was already in the process of fixing the standard exit paperwork over the past month or so."
=> By "fixing", I don't mean removing the non-disparagement clause, I mean make it ironclad while making the language less controversial and harder to argue with.
"if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too."
=> We'll fix the employee, not the problem.
"very sorry about this."
=> Very sorry we got caught.
There was one thing that I cared about (anti-competitive behavior, things could technically be illegal, but what counts is policy so it really depends on what the local authority wants to enforce), so I asked a lawyer, and they said: No way this agreement prevents you from answering that kind of questioning.
Because lawyers are in the business of managing risk, and knowing what OC was unhappy about was very much relevant to knowing if he presented a risk.
Exceptions require sign off and thinking. The optimal answer is go with the flow. In an employment situation, these sorts of terms require regulatory intervention or litigation to make them go away, so it’s a good bet that most employees will take no action.
> Comp clawbacks are quite common in finance
Common? Absolutely not. It might be common for a tiny fraction of investment bank staff who are considered (1) material risk takers, (2) revenue generators, or (3) senior management. OpenAI is different: they don’t grant options, but “Units” that are more like RSUs.
First of all, it beggars belief that this whole thing could be the work of HR people or lawyers or something, operating under their own initiative. The only way I could believe that is if they deliberately set up a firewall to let people be bad cops while giving the C-suite plausible deniability. Which is no excuse.
But...you don't think they'd have heard about it from at least one departing employee, attempting to appeal the onerous terms of their separation to the highest authority in the company?
Maybe it’s confirmation bias, but I do feel like the quality of discourse has taken a nose dive.
The people barking are actually the least worrisome, they’re highly engaged. The meat of your users say nothing and are only visible in-house.
That said, they also don’t give a shit about most of this. They want their content and they want it now. I am very confident spez knows exactly what he’s talking about.
They threatened to block the employee who pushed back on the non-disparagement from participating in tender offers, while allowing other employees to sell their equity (which is what the tender offers are for). This is not a "market term".
don't contact OpenAI legal, which leaves an unsavory paper trail
contact me directly, so we can talk privately on the phone and I can give you a little $$$ to shut you up
And you don't get the meal, either.
How long could someone write in ALL CAPS before they get fired?
And it works in part because things often are accidents - enough to give plausible deniability and room to interpret things favorably if you want to. I've seen this from the inside. Here are two HN threads about times my previous company was exposing (or was planning to expose) data users didn't want us to: [1] [2]
Without reading our responses in the comments, can you tell which one was deliberate and which one wasn't? It's not easy to tell with the information you have available from the outside. The comments and eventual resolutions might tell you, but the initial apparent act won't. (For the record, [1] was deliberate and [2] was not.)
[1] >>23279837
[2] >>31769601
since i've always typed like this, i've joked with my mother that if i ever send her a message with proper capitalization and punctuation, it's a secret signal that i've been kidnapped!
In tech I’ve never even heard a rumor of something like this.
future gpt prompt : "Take 200000 random comments and threads from hacker news, look at how they rank over time and make assumptions about how the moderation staff may be affecting what you consume. Precisely consider the threads or comments which have risque topics regarding politics or society or projects that are closely related to Hacker News moderation staff or Y Combinator affiliates."
All you can really do on the internet is ride the waves of synchronicity where the community and moderation are in harmony, and jump ship when they aren't! Any conceit that some algorithm or innovation or particular transparency will be a cure-all for <whatever it is we want> never seems to pan out; the boring truth is that we are all soft, squishy people.
Show me a message board that is ultimately as harmonious, diverse, and big as this one!
companies say that all the time.
another way they do it is to say: it is company policy, sorry, we can't help it.
thereby trying to avoid individual responsibility for the iniquity they are about to perpetrate on you.
> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.
1. Get cash infusion from microsoft
2. Do microsoft playbook of 'oh I didn't mean to be shady we will correct' when caught.
3. In the meantime there are uncaught cases as well as the general hand waving away of repeated bad behavior.
4. What sama did would get him banned from some -fetish- circles, if that says something about how his version of 'EA' deals with consent concerns.
It’s one of the startup catchphrases that brings people a lot of success when they’re small and people aren’t paying attention, but starts catching up when the company is big and under the microscope.
If you see it in a job advert, I'd assume the same for the people who are doing the hiring.
So think about that. They offer you an average-to-low base salary but sweeten the deal with some 'equity', saying it gives you a stake in the company. Neglecting to mention, of course, how many ways equity can be invalidated; how a year in tech is basically a lifetime; and how the whole thing is kind of structured to prevent autonomy as an employee. Often founders use these kinds of offers to gauge 'interest', because surely the people willing to take an offer backed more by magic-bean equity money (over real money) are truly the ones most dedicated to the company's mission. So not being grateful for such an amazing offer would be taken as a sign of offence by most founders (who would prefer to pay in hopes and dreams if they could).
Now... with a shitcoin... even though the price may tank to zero you'll at least end up with a goofy item you own at the end of the day. Equity... not so much.
Normally a company has to give you new "consideration" (which is the legal term for something of value) for you to want to sign an exit agreement - otherwise you can just not bother to sign. Usually this is extra compensation. In this case they are saying that they won't exercise some clause in an existing agreement that allows them to claw back.
Mistral even has Azure distribution.
FAIR is flat open-sourcing competitive models and has a more persuasive high-level representation learning agenda.
What cards? Brand recognition?
There was still potential to engage there:
"That's alright, as you said it's not enforceable anyway, so just remove it from everyone's contract. It'll just be the new version of the contract for everyone."
Doubt it would have made any difference though, as the lawyer was super likely bullshitting. Why? OpenAI is a shitshow. Their legal structure is a mess. Yanking vested equity on the basis of a post-purchase agreement signed under duress sounds closer to securities fraud than anything thought out.
Even if that's true (and I'm not saying it is, or it isn't, I don't think anyone on the outside knows enough to say for sure), is it because they genuinely agree they did something egregiously wrong and they will really change their behavior in the future? Or is it just because they got caught this time so they have to fix this particular mistake, but they'll keep on using similar tactics whenever they think they can get away with it?
The impact of such uncertainty on our confidence in their stewardship of AI is left as an exercise for the reader.
whether i use them or not is basically a function of how much i think there will be consequences for not using them. if i do use them without coercion, it's for Emphasis, or acronyms (like AI), or maybe sPoNgEbOb CaSe
i'm not sure where AI CEOs, or younger generations picked it up. but the "only use capitals when coerced" part seems similar
HOW DO I WORK THIS DIFFERENCE ENGINE STOP
'yes plaese
Sent from my iPhone'
Definitely a 'I'm very busy look at me' powermove
I also remember when the internet was talking about the twenty four Reddit accounts that threatened to quit the site. It’s enlightening to see that the protest the size of Jethro Tull didn’t impact the site
ps "responsibility" means "zero consequences"
Although I suppose someone could claim the email was sent by mistake, and some deliberate changes aren't announced.
HN drives a boatload of traffic, so getting on the front page has economic value. That means there are 100% people out there who will abuse a published ranking system to spam us.
Huh I should read mine.
It's worth noting that Hanlon’s razor was not originally intended to be interpreted as a philosophical aphorism in the same way as Occam’s:
> The term ‘Hanlon’s Razor’ and its accompanying phrase originally came from an individual named Robert J. Hanlon from Scranton, Pennsylvania, as a submission for a book of jokes and aphorisms published in 1980 by Arthur Bloch.
https://thedecisionlab.com/reference-guide/philosophy/hanlon...
Hopefully we can collectively begin to put this notion to rest.
Aspirations keep people voting against their interests.
I personally worry that the way fans of OpenAI and Stability AI are lining up to criticise artists for demanding to be compensated, or accusing them of “gatekeeping” could be folded into a wider populism, the way 4chan shitposting became a political position. When populism turns on artists it’s usually a bad sign.
I think your sample frame is off, they did themselves unforced damage in the long run.
Anything else people read into it is very often just projection.
It should not be surprising that the outcomes are different.
They’re great partners when confronted with this kind of contract. And fundamentally, if my adversary/future employer retains counsel, I should too. Why be at a disadvantage when it’s so easy to pay money and be on even footing?
There are some areas my ethics don’t mesh with, but at the end of the day this is my work and I do it for pay. And when I look at results, lawyers are the best investment I have ever made.
5. Sam Altman
I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.
Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"
What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.
https://www.ftc.gov/news-events/news/press-releases/2024/04/...
Now Sam is seen as fucking with said stock, so maybe that isn’t panning out. Amazing surprise.
I love this bullshit sentence formulation that claims to both have known this already--as in, don't worry we're ALREADY on the case--and they're simultaneously embarrassed that they "just" caught it--a.k.a. "wow, we JUST heard about this, how outRAGEOUS".
i don't normally do it anymore, but for this post i've gone sans-caps. kickin it old school. (yaimadork)
However, the fact that corporate leadership could even make those threats to not-yet-departed employees indicates that something is already broken or missing in the legal relationship with current ones.
A simple example might be for the company to clearly state in its handbook--for all current employees--that vested shares cannot be clawed back.
It's definitely had a real impact - but since it's not one that's likely to hit the bottom line in the short term, it's not like it matters in any way beyond the user experience.
That said, I think you could easily correlate my HN activity with my Reddit usage (inverse proportionality). Loving it tbh; higher quality content overall, and better than Slashdot ever was.
The documents show this really was not a mistake and "I didn't know what the legal documents I signed meant, which specifically had a weird clause that standard agreements don't" isn't much of a defence either. The whole thing is just one more point in favor of how duplicitous the whole org is, there are many more.
How would you interpret this part?
> and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual.
This is interesting - was it mutual for most people?
This is a very standard psychopathic behavior.
They (psychopaths) typically milk the willingness of their victims to accept the apology and move on to the very last drop.
Altman is a high-IQ manipulative psychopath; there is a trail of breadcrumb evidence 10 miles long at this point.
Google "what does paul graham think of Sam Altman" if you want additional evidence.
People will continue to defend and worship Altman until their last drop of blood on HN and elsewhere, consumers will continue using GPT, businesses will keep hyping it up and rivers of cash will flow per status quo to his pockets like no tomorrow.
If one truly wants to make a change, one should support alternative open-source models to remove our dependency on Altman and co; I fear a day when such powerful technology is tightly controlled by OpenAI. We have already given so much of our computing freedom away to a handful of companies; let's make sure AI doesn't follow. Honestly, I wonder if we would ever have access to Linux if it were invented today.
That does not mean you should not hear someone out. As far as I am aware Annie said Sam and their brother molested her as a kid. He claims otherwise, and deflects with “she is a drug addict” (heavily paraphrasing here). Lots of talk of how her trust was broken, and it is impossible to get justice against someone so rich and powerful, etc. where sama’s camp claim it is a money grab and there is zero proof. A sticky wicket.
Now, whether all these “new” revelations (honestly never thought Sam was honest) help support her claims is up to you. Just wanted to add some context for those unaware. Not accusing anyone.
5. Sam Altman
I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.
Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"
What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.
https://paulgraham.com/5founders.html *edited link due to first post getting deleted
It's such an insidious idea that we ought to accept people just giving up promises they explicitly made once those rules get in the way of doing exactly what they were supposed to prevent. That's not anyone else's problem; that was the point! The people who can't do that are supposed to align AI? They can't even align themselves.
Lots of people have pointed out problems with your determination, but here's another one: can you really tell none of those people are posting to subvert reddit? I'm not going to go into details for privacy reasons, but I've "quit" websites in protest while continuing to post subversive content afterwards. Even after I "quit," I'm sure my activity looked good in the site's internal metrics, even though it was 100% focused on discouraging other users.
Investors don't really care about consequences that don't hit the bottom line prior to an exit. Consumers are largely driven by hype. Throw a shiny object out there and induce FOMO, you'll get customers.
What we don't have are incentives for companies to give a damn. While that can easily lead to a call for even more government powers and regulation, in my opinion we won't get anywhere until we have an educated populace. If the average person either (a) understood the potential risks of actual AI or (b) knew that they didn't understand the risks, we wouldn't have nearly as much money being pumped into the industry.
Yet you use "an" for a vowel that's miles away, so I don't like the way you type either.
It's not really the tech that is negative; it is the humans manipulating it for profit and power, and behaving obnoxiously. The tech is very useful.
"thx" is way to verbose for anyone but a plebs, the real power brokers use "ty." Or they don't thank anyone at all, because they know just bothering to read the message they got is thanks enough.
Aside: "full stop" is the Commonwealth English way of saying "period" so it seems like an affectation to see an American using it.
But even without that, judges have huge amounts of leeway to “create” an ex post facto contract and say “here's the version of that contract you would have agreed to; this is now the contract you signed.” A sort of “fixed” version of the contract.
Severability clauses themselves are not necessarily valid; whether provisions can be severed and how without voiding the contract is itself a legal question that depends on the specific terms and circumstances.
Well, no:
> We're removing nondisparagement clauses from our standard departure paperwork, and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual. We'll communicate this message to former employees.
So the formerly successfully-blackmailed employees stay blackmailed.
It's a worse deal in retrospect for a successful company. But there and then, it's not very attractive to pay an up-front tax on something that you can sell at an unknown price in the relatively far future.
But I guess anyone could be silenced with enough economic incentive?
The percentage of HN users defending Altman has dropped massively since the board scandal ~6 months ago.
>consumers will continue using GPT, businesses will keep hyping it up
Customers will use the best model. If OpenAI loses investors and talent, their models may not be in the lead.
IMO the best approach is to build your app so it's agnostic to the choice of model, and take corporate ethics into consideration when choosing a model, in addition to performance.
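To sketch what "agnostic to the choice of model" might look like in practice: the app codes against one small interface, and swapping vendors becomes a config change rather than a rewrite. This is a minimal illustration; the names (`ChatModel`, `EchoModel`, `summarize`) are hypothetical and not from any real SDK.

```python
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """The only surface the app is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoModel:
    """Stand-in provider; a real adapter would wrap a vendor SDK here."""
    name: str

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API and return its text.
        return f"[{self.name}] {prompt}"


def summarize(model: ChatModel, text: str) -> str:
    # App logic only ever sees the ChatModel interface, never a vendor SDK,
    # so switching providers is a one-line change at the call site.
    return model.complete(f"Summarize: {text}")


print(summarize(EchoModel("vendor-a"), "hello"))
```

With adapters like this, the ethics-or-performance decision the comment describes becomes a deployment choice instead of an architectural one.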
But maybe there's a further step that someone like OpenAI seems uniquely capable of evolving.
That's not to say anything OpenAI or Altman do is excusable, no way. I just feel like there are almost no good guys in this story.
OpenAI is worth $100B. At that level, a founder's stake would be worth $20B at least.
But Sam isn't getting any of that net worth, yet he gets all the bad rep that comes with running a $100B company.
- The Wretched of the Earth, Frantz Fanon
If you can do X in the first place, I don't think there's any general rule that you can't condition X on someone not signing a contract.
Also, how much is there to customize in a turtleneck? Seems like the same signal as a very expensive suit, "I have a lot of money", nothing more.
Which is interesting, because it's sacrilege to insinuate that it's being gamed at all.
I'm curious, what do you think deleting accounts and starting new is going to do?
They'll just link it all together another way.
The short version is that users flagged that one plus it set off the flamewar detector, and we didn't turn the penalties off because the post didn't contain significant new information (SNI), which is the test we apply (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). Current post does contain SNI so it's still high on HN's front page.
Why do we do it this way? Not to protect any organization (including YC itself, and certainly including OpenAI or any other BigCo), but simply to avoid repetition. Repetition is the opposite of intellectual curiosity (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...), which is what we're hoping to optimize for (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...).
I hesitate to say "it's as simple as that" because HN is a complicated beast and there are always other factors, but...it's kind of as simple as that.
It's pretty established now that they had some exceptionally anti-employee provisions in their exit policies to protect their fragile reputation. Sam Altman is bluntly a liar, and his credibility is gone.
Their stance as a pro-artist platform is a joke after the ScarJo fiasco, that clearly illustrates that creative consent was an afterthought. Litigation is assumed, and ScarJo is directly advocating for legislation to prevent this sort of fiasco in the future. Sam Altman's involvement is again evident from his trite "her" tweet.
And then they fired their "superalignment" safety team for good measure. As if to shred any last measure of doubt that this company is somehow more ethical than any other big tech company in their pursuit of AI.
Frankly, at this point, the board should fire Sam Altman again, this time for good. This is not the company that can, or should, usher humanity into the artificial intelligence era.
"this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have."
The first thing the above conjures up is the other disgraced Sam (Bankman-Fried) saying "this is on me" when FTX went bust. I bet euros-to-croissants I'm not the only one to notice this.
Some amount of corporate ruthlessness is part of the game, whether we like it or not. But these SV robber barons really crank it up to something else.
[1] >>40425735
This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company. In reality, the expectation is that a CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.
But it doesn't vary based on specific persons (not Sam or anyone else). Substantive criticism is fine, but predictable one-liners and that sort of thing are not what we want here—especially since they evoke even worse from others.
The idea of HN is to have an internet forum—to the extent possible—where discussion remains intellectually interesting. The kind of comments we're talking about tend to choke all of that out, so downweighting them is very much in HN's critical path.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
OpenAI doesn't have shares per se, since they're not a corporation but some newfangled chimeric entity. Given the man who signed the documents allegedly didn't read them, I'm not sure why one would believe everything else is buttoned up.
Then I strike the offending passage out on both copies of the contract, sign and hand it back to them.
Your move.
¯\_(ツ)_/¯
In the original context, it sounded very much like he was referring to clawed-back equity. I’m trying to find the link.
This is the final comment [1] that got Michael’s account banned.
You can see dang’s reply [2] directly underneath his which says:
> We've banned this account.
1: >>10017538
2: >>10019003
I was trying to be a bit restrained in my criticism; otherwise, it gets too repetitive.
"We're replacing them with even more draconian terms that are not technically nondisparagement clauses"
> and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual.
"We offered some employees $1 in exchange for signing up to the nondisparagement clause, which technically makes it a binding contract because there was an exchange of value."
These are the hottest controversial events so far, in chronological order:
OpenAI's deviation from its original mission (https://news.ycombinator.com/item?id=34979981).
The Altman saga (https://news.ycombinator.com/item?id=38309611).
The return of Altman (within a week) (https://news.ycombinator.com/item?id=38375239).
Musk vs. OpenAI (https://news.ycombinator.com/item?id=39559966).
The departure of high-profile employees (Karpathy: https://news.ycombinator.com/item?id=39365935, Sutskever: https://news.ycombinator.com/item?id=40361128).
"Why can’t former OpenAI employees talk?" (https://news.ycombinator.com/item?id=40393121).

It suggests humans make mistakes and sometimes own up to them - which is a good thing.
> CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.
There is no human who does this, or are you saying we should turn the CEO role over to AI? :)
At one of my first jobs as a student employee they offered me a salary X. In the contract there was some lower number Y. When I pointed this out, they said "X includes the bonus. It's not in the contract but we've never not paid it". OK, if this is really guaranteed, you can make that the salary and put it in writing. They did, my salary was X and that year was the first time they didn't pay the optional bonus. Didn't affect me, because I had my salary X.
IANAL and I don't know how binding this is. I'd think it's crucial for it to be in both copies of the contract, otherwise you could have just crossed it out after the fact, which would of course not be legally binding at all and probably fraud (?)
In practice, it doesn't really come up, because the legal department will produce a modified contract or start negotiating the point. The key is that the ball is now in their court. You've done your part, are ready and rearin' to go, and they are the ones holding things up and being difficult, for something that according to them isn't important.
UPDATE:
I think it's important to note that I am also perfectly fine with a verbal agreement.
A working relationship depends on mutual trust, so a contract is there for putting in a drawer and never looking at it again...and conversely if you are looking at it again after signing, both the trust and the working relationship are most likely over.
But it has to be consistent: if you insist on a binding written agreement, then I will make sure what is written is acceptable to me. You don't get to pick and choose.
AI raises all sorts of extremely non-tech questions about power, which causes all the drama.
Edit: also, they've selected for people who won't ask ethical questions. Thus running into the classic villain problem of building an organization out of opportunistic traitors.
They think they are about to change the entire world. And a very large part of the world agrees. (I personally think it's a great tool, but exaggerated.)
But that created a very big power play where people don't act normal anymore and the most power-hungry people come out to play.
Now that LLM alternatives are getting better and better, and well-funded competitors abound, they don't yet seem to have developed a new, more advanced technology. What's their long-term moat?
Oh! So free speech is up for trade! We used to hear that sort of statement from certain political regimes, but this is the first time I've read it in the tech world. Will we live to witness more variations of this behavior on a larger scale?!
> High-pressure tactics at OpenAI
> That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars
> When ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI.
> “We want to make sure you understand that if you don't sign, it could impact your equity. That's true for everyone, and we're just doing things by the book,”
Although they've been able to build the most capable AI models that could replace a lot of human jobs, they struggle to humanely manage the people behind these models!!
One is a well meaning but very naive older person who desperately wants to be liked by the cool kids, the other is a pretentious young conman who soars to the top by selling his “vision”. Michael is a huge simp for Ryan and thinks of himself as Ryan’s mentor, but is ultimately backstabbed by him just like everyone else.
Given the considerable effort that has gone into this by the time you are negotiating a contract, letting it fail over something that "is not important" and "is never enforced" would be very stupid of them.
So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.
Neither of which is a great advertisement for the company as an employer.
"i've been genuinely embarrassed" --> "yep, totally not my fault actually"
"I should have known" --> "other people fucked this up, and they didn't even inform me"
Most of the time it's basically just FUD, to coerce people into following the rule-that-is-never-enforced.
You correctly interpreted the point I was making — Steve Jobs treated his casual look as seriously as others treat an expensive tailored suit. And the result means he's still signalling importance and success, without also signalling conformity and "old world" corporate vibes.
The article makes it clear that it wasn't a mistake at all. It's a lie. They were playing hardball, and when it became public they switched to PR crisis management to try and save their "image", or what's left of it.
They're not the good guys. I'd say they're more of a caricature of bad guys, since they get caught every time. Something between a classic Bond villain and Wile E. Coyote.
And standard doesn't mean shit... Every regime in the history of mankind had standards!
Were they really stupid enough to think that the amount of money being offered would bend some of the most principled people in the world?
Whoever allowed those clauses to be added and let them remain has done more damage to the public face of OpenAI than any aggravated ex-employee ever could.
First of all, taking any code with you is theft, and you go to jail, like this poor Goldman Sachs programmer [1]. This will happen even if the code has no alpha.
However, no one can prevent you from taking knowledge (i.e. your memories), so reimplementing alpha elsewhere is fine. Of course, the best alpha is that which cannot simply be replicated, e.g. it depends on proprietary datasets, proprietary hardware (e.g. fast links between exchanges), access to cheap capital, etc.
What hedge funds used to do, is give you lengthy non-competes. 6months for junior staff, 1-2y for traders, 3y+ in case of Renaissance Technologies.
In the US, that's now illegal and unenforceable. So what hedge funds do now is lengthy garden(ing) leaves. This means you still work for the company, you still earn a salary, and in some (many? all?) cases also the bonus. But you don't go to the office, you can't access any code, and you don't see any trades. The company "moves on" (develops/refines its alpha, including your alpha - alpha you created) and you don't.
These lengthy garden leaves replaced non-competes, so they're now 1y+. AFAIK they are enforceable, just as non-competes during employment always have been.
[1] https://nypost.com/2018/10/23/ex-goldman-programmer-sentence...
We have to look at the reality that the worst excesses of the new Silicon Valley culture aren’t stemming from the adults sent to run the ship anymore, and they aren’t stemming from the nerds those adults co-opt anymore either.
The worst excesses of the new Silicon Valley culture are coming from nerds who are empowered and rewarded for their superpower of being unable to empathise.
And I say that as someone who is back to being almost a hermit. We got here by paying people like us and not insisting we try to stop saying what we think without pausing first to think about how it will be received by people not like us.
It’s not a them-vs-us thing now. It’s us-vs-us.
If you're autistic, have an extra chromosome, or will admit you're genuinely dumb, I'll apologize. But otherwise, nah.
It's all just drama to draw attention and investor money, that's it.
When the inventors leave there is nothing left to do but sell more.
With the breakneck progress of AI over the last year, there has been a clear trend in the media from "Wow, this is amazing (and a little scary)" to "AI is an illegal dumpster fire that needs to be killed; you should stop using it and companies should stop making it."
[1] The keywords are promissory estoppel. I'm not a lawyer but this looks at least like a borderline case worth worrying about.
Nobody has to use youtube either.
If you want change in the video platform space, either be willing to pay a subscription or watch ads.
Consumers don't want to do either, and hence no one wants to enter the space.
In the intro, Patrick goes off-script to make a joke about how last year he'd interviewed SBF, which was "clearly the wrong Sam".
I'm eagerly waiting for 2025, when he interviews some new Sam and is able to recycle the joke. :)
Just to give a sickening example: I was approached by the CEO to fix a very bad deepfake video that some "AI" engineer had made with available tools. They asked me to use After Effects and editing to make the lips sync....
On top of that, this industry is driving billions of investment into something that is probably a death sentence for a lot of workers, cultures, and society, and it is not fixing or helping with our current world problems in ANY other way.
> I was in meetings with a lawyer at the company who told me it's probably not enforceable, don't worry about it
Life rule: if the party you're negotiating a contract with says anything like "don't worry about that, it's not enforceable" or "it's just boilerplate, we never enforce that" but refuses to strike it from the contract, then run, don't walk, away from the table. Whoever you're dealing with is not operating in good faith.
Do not sign a contract unless you are willing to entirely submit to everything in it that is legally binding.
Also be careful with extremely vague contracts. My employment contract was basically "You will do whatever we need you to do" and surprise surprise, unpaid overtime is expected.
Did you know that people who have been involved with Y Combinator who make an account on HN can see everyone else who has been a part of Y Combinator? Their usernames are highlighted a different color.
It's a literal secret club that they rarely acknowledge.
Often RSUs in non-public companies come with a double trigger: you need both the vest to happen and a liquidity event to happen for the actual delivery of those shares, so no tax implications until a liquidity event (afaik, but don’t take tax advice from randos on the internet).
In the US, equity given as compensation for work could be taxed as wages, or, under certain circumstances, as capital gains.
The one year is for some capital gains to get considered long term gains, which may be taxed at a lower marginal rate than regular wages.
In other words, if you are granted equity as compensation, go talk at length to a tax professional to get an understanding of the taxation of it.
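As a rough, back-of-the-envelope illustration of why the one-year holding period matters (entirely made-up rates and numbers; this is a sketch of the general mechanics, not tax advice): the fair market value at delivery is taxed as ordinary income, and any later gain is taxed at either the short-term or the long-term rate depending on the holding period.

```python
def rsu_tax(shares, fmv_at_delivery, sale_price, held_over_1yr,
            ordinary_rate=0.35, lt_cap_gains_rate=0.15):
    """Toy model of double-trigger RSU taxation with assumed flat rates.

    Value at delivery is taxed as wages; the gain between delivery and
    sale is taxed as short-term (ordinary rate) or long-term capital
    gains depending on the holding period.
    """
    wages = shares * fmv_at_delivery                 # ordinary income at delivery
    gain = shares * (sale_price - fmv_at_delivery)   # gain after delivery
    gains_rate = lt_cap_gains_rate if held_over_1yr else ordinary_rate
    return wages * ordinary_rate + max(gain, 0) * gains_rate


# 1,000 shares delivered at $10, later sold at $25:
short_term = rsu_tax(1000, 10, 25, held_over_1yr=False)  # whole amount at 35%
long_term = rsu_tax(1000, 10, 25, held_over_1yr=True)    # gain portion at 15%
print(short_term, long_term)
```

Real taxation layers on brackets, state taxes, AMT, withholding, and more, which is exactly why the comment above says to talk to a tax professional.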
In my experience, founders need all the help they can get making friends, I'm glad they have a little club!
Edit - sry why is this the top comment
I would also suggest such conversation would need to be corralled into some sort of secondary HN forum branch: discussion of observations, insights, etc. In general it could be useful for people to learn how to observe these patterns on sites they own or manage themselves.
I do understand it can facilitate a bit of an arms race, in that if there are bad actors seeking to run many human-looking bot accounts (or a single person orchestrating many accounts), they too would now see how their fingerprints look compared to others.
Ultimately I think Elon Musk is right, though, that to help dissuade spam and organizations/ideologues from shaping narratives and controlling what's allowed to be seen and discussed, an actual $ cost is required.
Perhaps HN could implement a $5/month tier (or even higher tiers)? For most on HN who are in the tech field, even $50/month for an arguably more curated, more heavily moderated forum isn't much for an individual. You could filter to show only posts and/or comments from paying accounts, or better yet, weight votes by tier; that makes things expensive for someone running, say, 1,000 accounts. Unfortunately, $50,000/month isn't much for organizations or nations with an agenda, if that's all it takes to keep certain truths suppressed as much and as quickly as possible.
> “We're incredibly sorry that we're only changing this language now; it doesn't reflect our values or the company we want to be.”
Yeah, right. Words don't necessarily reflect one's true values, but actions do.
And to the extent that they really are "incredibly sorry", it's not because of what they did, but that they got caught doing it.
(The handshake is probably not a legal requirement, though I suppose it could be taken into consideration as evidence -- "You even shook hands on it, so you must have realised that what you had just discussed were actually the terms you were agreeing to.")
> I have worked for multiple startups (Malwarebytes...
Note the "have worked" and the rather long list of places they've worked. If that list is in chronological order (sure didn't look alphabetical), Malwarebytes doesn't have to be a startup now for it to have been one when GP worked there.
My best friend is a lawyer, so heck knows how difficult that test can be -- he passed it. ;-)
Unless and until that's what they say, looks like they’re not doing that.
Think I'll be staying away from "AI" for a while longer.
Sorry, you may be (are probably) being perfectly honest and sincere, but... it's still too many coincidences not to give rise to doubts. If about nothing else, then about the weights in your algorithms (the post of a negative-headline article that lasted under two hours on the front page didn't look all that much more flame-war-y to me than the one on a positive-headline article that lasted over twelve) or your definition of "significant" ("Breaking news, OAI says they didn't do it!" Yeah right, how is that significant; which crook doesn't say that wasn't actually his paw in the cookie jar?).
Or maybe it's bigger; maybe it's time for the, uhm, "tone" of the entire site to change? I mean, society at large seems to have come to the insight that wannabe rentier techbros just aren't likely to be good guys. And maybe your intended audience -- "startup hackers", something like that, right? -- are beginning to resemble mainstream society in this respect?
Maybe we "Hackers" are coming to the realisation that on the current trajectory of the tech industry in the late-stage capitalism era, "two guys with their laptops in a garage" are not very likely to become even (paltry!) multi-millionaires, because all the other "two guys with their laptops in a garage" ten-fifteen-twenty years ago (well, the ones of them that made it, anyway) installed such insurmountable moats around their respective fiefdoms ("pulled up the ladder behind them", as we'd say if they were twenty years older) that making it big as an actual "Hacker" is becoming nigh-impossible?
I mean, to try and illustrate by example: The Zuck zucks even in the mind of most HN regulars, right? But if you trawl through early posts (pre-2017? -14? -10?), betcha he's on average much more revered there than he is now. A bit like Musk still seems to be, and up until a year or whatever ago, that other Sam (Frazzled Blinkman?) was, and... The rate and mechanism of change here seems to be "Oops, yet another exception, a wannabe rentier techbro who turned out to be a slimebag. But as a group, wannabe rentier techbros are of course still great!" Maybe it's time to go through the algorithms and prejudices and turn down all the explicit and implicit "Wannabe Zuck[1] = Must be great!" dials?
Because as it is, these biases -- "just perceived" as you seem to be saying, "implicit and built into the very foundations of HN" as I'd tentatively say; does it even matter which it is? -- seem to be making HN not a forum for the current-day "two guys with their laptops in a garage", but for fanboyism of the (current and) next group of Bezoses, Zuckerbergs and Musks[1]. Sorry, I haven't checked out the mission statement recently (even though you so graciously linked to it), but is that really what HN is supposed to be?
___
[1]: Well, I'm old enough that I almost wrote "Gates and Ellison" there... Add them in if you want.
No, "since 2015" is by definition not "a long history".
For a long history, try the principality of San Marino, or (if you want a company) Stora Kopparbergs Bergslags AB. Or one of the Japanese temple-builder family companies. 2015 "a long history" -- wasn't that when I last took a dump?
However it can be disputed, and a company could argue about the timing or details.
That’s why you’re often asked to initial changes, makes it clear that both parties have agreed to the modifications.
First you’re offering up a lot of trust to people you might have just started working with.
Or, they could be very trustworthy and just remember things differently. And of course people come and go at companies all the time; they just might not be there later.
At least if you do a verbal agreement follow it up with an email confirming the details.
The key word there is "seemingly". You notice what you're biased to notice and generalize based on that. People with different views notice different things and make different generalizations.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...