You have to be really attuned to "is this actually rational and sound, or am I adding in an implicit 'but we're good people, so...'"
Obviously that should not be possible any more with these leaked documents, given that they prove both the existence of the scheme and that Altman and other senior leadership knew about it. Maybe they thought that since they'd already gagged the ex-employees, nobody would dare leak the evidence?
>Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about.
>OpenAI's incorporation documents contain multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.
>Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI.
Let's say I find a profitable niche while working for a project and we decide to open a separate spin-off startup to handle that idea. I'd expect the legal side to be handled for me, inherited from the parent company.
Now let's also say the company turns out to be disproportionately successful. I'd have a lot on my plate to worry about, the least of which would be the legal boilerplate the company inherited.
In this scenario it's probable that hostile clauses would be dug up in the contracts. I would surely be legally responsible for them, but how much would I truly be to blame for them?
And if the company handles the incident well, how much should that blame matter?
And the employees also have way more leverage than Reddit users; at this point they should still be OpenAI's greatest asset. Even once this is fixed (which it obviously will be, given they got caught), it's still going to cause a major loss of trust in the entire leadership.
It accelerated rapidly with trends like the Tea Party, Gamergate, Brexit, Andrew Wakefield, COVID anti-vax, and the Ukraine situation, and it's in evidence on both sides of the trans rights debate, in doxxing, in almost every single argument on X that goes past ten tweets, etc.
It's something many on the left have generally identified as being worse on the right wing or alt.right.
But this is just because it's easier to categorise it when it's pointing at you. It's actually the primary toxicity of all argument in the 21st century.
And the reason is that weaponised bad faith is addictive fun for the operator.
Basically everyone gets to be Lee Atwater or Roger Stone for a bit, and everyone loves it.
I know extremely desirable researchers who refuse to work for Elon because of how he has historically treated employees. Repeated issues like this will slowly add OpenAI to that list for more people.
Just as Reddit users stay on Reddit because there is nowhere else to go, the reality is that everyone worships leadership because they keep their paychecks flowing.
The relevant stakeholders here are the potential future employees, who are seeing in public exactly how OpenAI treats its employees.
Changes like that are hard to measure.
I didn’t post about not engaging with or using the platform anymore. Nor did I delete my account, since it still holds some value to me. But I slunk away into the darkness, and now HN is my social media tool.
That sounds like a really bad idea for many many reasons. Lawyers are cheap compared to losing control, or even your stake, to legal shenanigans.
It depends a bit on what you mean by left and right, but take something like Marxism: it was always 100% a propaganda effort created by people who owned newspapers, and the pervasiveness of propaganda has been a through line ever since, e.g. in the Soviet Union, agitprop, etc. A big part of Marxist theory is that there is no objective reality, that social experience completely determines everything, and that sort of ideology naturally lends itself to the belief that blankets of bad-faith arguments for "good causes" are a positive good.
This sort of thinking was unpopular on the left for many years, but it's become more hip, no doubt thanks to countries like Russia and China trying to re-popularize communism in the West.
I think perhaps I didn't really make it totally clear that what I'm mostly talking about is a bit closer to the personal level -- the way people fight their corners, the way twitter-level debate works, the way local politicians behave. The individual, ghastly shamelessness of it, more than the organised wall of lies.
Everyone getting to play Roger Stone.
Not so much broadcast bad faith as narrowcast.
I get the impression Stalinism was more like this -- you know, you have your petty level of power and you _lie_ to your superiors to maintain it, but you use weaponised bad faith on those you have power over.
It's a kind of emotional cruelty, to lie to people in ways they know are lies, that make them do things they know are wrong, and to make it obvious you don't care. And we see this everywhere now.
You see the same pattern with social media accounts who claim to be on the Marxist-influenced left. Their tactics are very frequently emotionally abusive or manipulative. It's basically indistinguishable in style from how people on the fringe right behave.
Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.
Why wouldn’t they? I’m sure you can think of a couple of politicians and CEOs who in recent years have clearly demonstrated that no matter what they do or say, they will have a strong core of rabid fans eating up their every word and defending them.
Changes in sentiment can be hard to measure, but changes in posting behavior seem incredibly easy to measure.
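To make that concrete, here's a minimal sketch of the kind of before/after count I mean, assuming a hypothetical `posts` table with an ISO-dated `created_at` column (the schema is illustrative, not Reddit's actual one):

    import sqlite3

    EVENT = "2023-06-12"  # example split point: start of the Reddit API blackout

    conn = sqlite3.connect("posts.db")
    # Total posts in the 30 days before vs. after the event; a real analysis
    # would group by user_id, but the query shape is the same.
    before_n, after_n = conn.execute(
        """
        SELECT
          SUM(created_at >= date(?, '-30 days') AND created_at < ?),
          SUM(created_at >= ? AND created_at < date(?, '+30 days'))
        FROM posts
        """,
        (EVENT, EVENT, EVENT, EVENT),
    ).fetchone()
    print(f"posts 30d before: {before_n}, 30d after: {after_n}")

Anyone sitting on the post log can run that in seconds; sentiment is the part that takes actual work.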
Oh I agree. I wasn't making it a right-vs-left thing, but rather pushing back on the perception that it is one.
I would not place myself on the political right at all -- even in the UK -- but I see this idea that bad faith is an alt.right thing and I'm inclined to push back, because it's an oversimplification.
Maybe it’s confirmation bias, but I do feel like the quality of discourse has taken a nose dive.
The people barking are actually the least worrisome; they’re highly engaged. The bulk of your users say nothing and are only visible in-house.
That said, they also don’t give a shit about most of this. They want their content and they want it now. I am very confident spez knows exactly what he’s talking about.
I also remember when the internet was talking about the twenty-four Reddit accounts that threatened to quit the site. It’s enlightening to see that a protest the size of Jethro Tull didn’t impact the site.
Aspirations keep people voting against their interests.
I personally worry that the way fans of OpenAI and Stability AI are lining up to criticise artists for demanding to be compensated, or accusing them of “gatekeeping” could be folded into a wider populism, the way 4chan shitposting became a political position. When populism turns on artists it’s usually a bad sign.
I think your sample frame is off; they did themselves unforced damage in the long run.
It's definitely had a very real impact - but since it's not one that's likely to hit the bottom line in the short term, it doesn't really matter in any way beyond the user experience.
That said, I think you could easily correlate my HN activity with my reddit usage (they're inversely proportional). Loving it tbh; higher quality content overall, and better than Slashdot ever was.
People on HN and elsewhere will continue to defend and worship Altman until their last drop of blood, consumers will continue using GPT, businesses will keep hyping it up, and rivers of cash will keep flowing into his pockets like there's no tomorrow.
If one truly wants to make a change, one should support alternative open-source models to remove our dependency on Altman and co; I fear a day when such powerful technology is tightly controlled by OpenAI. We have already given away so much of our computing freedom to a handful of companies; let's make sure AI doesn't follow.
Honestly, I wonder: would we ever have had access to Linux if it were invented today?
Lots of people have pointed out problems with your determination, but here's another one: can you really tell none of those people are posting to subvert reddit? I'm not going to go into details for privacy reasons, but I've "quit" websites in protest while continuing to post subversive content afterwards. Even after I "quit," I'm sure my activity looked good in the site's internal metrics, even though it was 100% focused on discouraging other users.
The percentage of HN users defending Altman has dropped massively since the board scandal ~6 months ago.
>consumers will continue using GPT, businesses will keep hyping it up
Customers will use the best model. If OpenAI loses investors and talent, their models may not be in the lead.
IMO the best approach is to build your app so it's agnostic to the choice of model, and take corporate ethics into consideration when choosing a model, in addition to performance.
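To make that concrete, here's a minimal sketch of that seam in Python. The `ChatModel` protocol and the adapter names are mine, and the OpenAI adapter assumes the v1 `openai` SDK's chat-completions interface; treat it as an illustration, not the one true layout:

    from typing import Protocol

    class ChatModel(Protocol):
        """The only interface the application code sees."""
        def complete(self, prompt: str) -> str: ...

    class OpenAIChat:
        """Vendor adapter; assumes the v1 `openai` SDK."""
        def __init__(self, model: str = "gpt-4o") -> None:
            from openai import OpenAI  # lazy import: no hard vendor dependency in the app core
            self._client = OpenAI()
            self._model = model

        def complete(self, prompt: str) -> str:
            resp = self._client.chat.completions.create(
                model=self._model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content or ""

    class EchoModel:
        """Stand-in for a self-hosted open model, or for tests."""
        def complete(self, prompt: str) -> str:
            return "echo: " + prompt

    def answer(model: ChatModel, question: str) -> str:
        # App logic depends only on ChatModel, so switching vendors
        # is a one-line change where the adapter is constructed.
        return model.complete(question)

    print(answer(EchoModel(), "hello"))

The ethics/performance judgment then gets made once, at the spot where the adapter is constructed, instead of being smeared across the codebase.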
But maybe there's a further step that someone like OpenAI seems uniquely capable of taking.
I'm curious, what do you think deleting accounts and starting new is going to do?
They'll just link it all together another way.