However, they could add this to new employee contracts.
(Don’t have X) - is there a timeline? Can I curse out the company on my deathbed, or would their lawyers have the legal right to try and clawback the equity from the estate?
https://www.clarkhill.com/news-events/news/the-importance-of...
I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.
Even if they had a massive, successful, and public safety team, and got alignment right (which I am highly doubtful is possible), it is still going to happen as massive portions of white-collar workers lose their jobs.
Mass protests are coming and he will be an obvious focus point for their ire.
Diamond multi-million-dollar handcuffs: OpenAI has bound employees with lifetime, secret-service-level NDAs, another unusual arrangement after their so-called "non-profit" founding and their contradictory name.
Even an ex-employee saying 'ClosedAI' could see their PPUs evaporate in front of them to zero or they could never be allowed to sell them and have them taken away.
https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
Maybe the agreement is "we will accelerate vesting of your unvested equity if you sign this new agreement"? If that's the case then it doesn't sound nearly so coercive to me.
Pushing unenforceable scare-copy to get employees to self-censor sounds on-brand.
He's already perceived by some as a bit of a scoundrel, if not yet a villain, because of Worldcoin. I bet he'll hit supervillain status right around the time that ChatGPT BattleBots storm Europe.
They're really lending employees equity, subject to the company's later feelings as to whether the employee should be allowed to keep or sell it.
But first amendment basically only restricts the government's ability to suppress speech, not the ability of other parties (like OpenAI).
This restriction may be illegal, but not on first amendment ("free speech") grounds.
Fucking monkeys.
It does not prevent you from entering into contracts with other private entities, like your company, about what THEY allow you to say or not. In this case there might be other laws about whether a company can unilaterally force that on you after the fact, but that's not a free speech consideration, just a contract dispute.
See https://www.themuse.com/advice/non-disparagement-clause-agre...
They're not required to sign anything other than a general release of liability when they leave to preserve their rights. They don't have to sign a non-disparagement clause.
But they'd need a very good lawyer to be confident at that time.
So yes, they're that fragile.
All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.
The first amendment is a US free speech protection, but it's not prototypical.
You can also find this in some other free speech protections, for example that in the UDHR
>Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
doesn't refer to states at all.
Bloomberg famously used this in employment contracts, and it was a campaign scandal for Mike.
Consider for example that when Amazon bought the Ring security camera system, it had a “god mode” that allowed executives and a team in Ukraine unlimited access to all camera data. It wasn’t just a consumer product for home users, it was a mass surveillance product for the business owners:
https://theintercept.com/2019/01/10/amazon-ring-security-cam...
The EFF has more information on other privacy issues with that system:
https://www.eff.org/deeplinks/2019/08/amazons-ring-perfect-s...
These big companies and their executives want power. Withholding huge financial gain from ex employees to maintain their silence is one way of retaining that power.
OpenAI made a large commitment to superalignment in the not-so-distant past, I believe mid-2023. Famously, it has always taken AI Safety™ very seriously.
Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research".
To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.
It's a lot of mental work to rally the emotion of revulsion over the evil they might be doing that is kept secret.
As for other companies that can pay: I can only assume that the cost to bribe skilled workers isn't worth the perceived risk and cost of lawsuits from the downfall (which they may or may not be able to settle). Generative AI is still very young and under a lot of scrutiny on all fronts, so the risk of a whistle blower at this stage may shape the entire future of the industry at large.
Individualistic
Nobody depends on you, I hope
I feel that this particular case is just another reminder of that, and would now make me require a preemptive “no equity clawbacks” clause in any contract.
What's his track record on promises/predictions of this sort? I wasn't paying attention until pretty recently.
Link? Is the ~2 year timeline a common estimate in the field?
He probably already knows that, but doesn't care as long as OpenAI has captured the world's attention with ChatGPT generating them billions and their high interest in destroying Google.
> Mass protests are coming and he will be an obvious focus point for their ire.
This is going to age well.
Given that no one knows the definition of AGI, AGI can mean anything, even if it means 'steam-rolling' any startup, job, etc. in OpenAI's path.
https://openai.com/index/introducing-superalignment/
> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
> While superintelligence seems far off now, we believe it could arrive this decade.
> Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:
> How do we ensure AI systems much smarter than humans follow human intent?
> Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
In general, an agreement to agree is not an agreement. A requirement for a "general release" to be signed at some time in the future is iffy. And that's before labor law issues.
Someone with a copy of that contract should run it through OpenAI's contract analyzer.
Also, when secrets or truthful disparaging information is leaked anonymously without a metadata trail, I'm thinking there's probably little or no recourse.
0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)
People are different! You can think otherwise.
I was once fired, ghosted style, for merely being in the same meeting room as a racist corporate ass-clown who muted the conference call to make Asian slights and monkey gesticulations. There was no lawsuit or payday because "how would I ever work again?" was the Hobson's choice between letting it go and a moral crusade without a way to pay rent.
If instead I were upset that "not enough N are in tech," there isn't a specific incident or person to blame because it'd be a multifaceted situation.
That programme aired in the 1980s. Vested promises aside, is there much to indicate it's close at all? There isn't really any indication of it being likely.
Moral stands are never free, but they are freeing.
It reeks of a scammer's mentality.
What a horrific medium of communication. Why anyone uses it is beyond me.
> I don't think it's going to happen next year, it's still useful to have the conversation and maybe it's like two or three years instead.
This doesn't seem like a super definite prediction. The "two or three" might have just been a hypothetical.
Superintelligence that can be always ensured to have the same values and ethics as current humans, is not a superintelligence or likely even a human level intelligence (I bet humans 100 years from now will see the world significantly different than we do now).
Superalignment is an oxymoron.
Humans are used to ordering around other humans who would bring common sense and laziness to the table and probably not grind up humans to produce a few more paperclips.
Alignment is about getting the AGI to be aligned with the owners, ignoring it means potentially putting more and more power into the hands of a box that you aren't quite sure is going to do the thing you want it to do. Alignment in the context of AGIs was always about ensuring the owners could control the AGIs not that the AGIs could solve philosophy and get all of humanity to agree.
AI experts who aren't riding the hype train and getting high off of its fumes acknowledge that true AI is something we'll likely not see in our lifetimes.
It’s yet another sign that the AI bubble will soon burst. The laughable release of “GPT-4o” was just a small red flag.
Got to keep the soldiers in check while the bean counters prep the books for an IPO and eventual early investor exit.
Almost smells like a SoftBank-esque failure in the near future.
Companies can cancel your vested equity for any reason. Read your employment contract carefully. For example, most RSU grants have a 7 year expiration. Even for shares that are vested, regardless of whether you leave the company or not, if 7 years have elapsed since they were granted, they are now worthless.
All this said, in the bigger picture I can understand not divulging trade secrets, but not being allowed to discuss company culture towards AI safety essentially tells me that all the Sama talk about 'for the good of humanity' is total BS. At the end of the day it's about market share and the bottom line.
> Whoa whoa whoa, we can't let just anyone run these models. Only large corporations who will use them to addict children to their phones and give them eating disorders and suicidal ideation, while radicalizing adults and tearing apart society using the vast profiles they've collected on everyone through their global panopticon, all in the name of making people unhappy so that it's easier to sell them more crap they don't need (a goal which is itself a problem in the face of an impending climate crisis). After all, we wouldn't want it to end up harming humanity by using its superior capabilities to manipulate humans into doing things for it to optimize for goals that no one wants!
This is the article that the author talks about on X.
This is the most concise takedown of that particular branch of nonsense that I’ve seen so far.
Do we want woke AI, X brand fash-pilled AI, CCPBot, or Emirates Bot? The possibilities are endless.
I suspect there will be at least continued commercial use of the current tech, though I still suspect this crop is another dead end in the hunt for AGI.
Once vested, RSUs are the same as regular stock purchased through the market. The company cannot claw them back, nor do they "expire".
They got completely outsmarted and outmaneuvered by Sam Altman.
And they think they will be able to align a superhuman intelligence? That it won’t outsmart and outmaneuver them more easily than Sam Altman did?
They are deluded!
Doesn’t mean that that’s legal, of course, but I’d doubt that the legality would hinge on a lack of consideration.
> same as regular stock purchased through the market
You cannot purchase stock of a private company on the open market.
> The company cannot claw them back
The company cannot "claw back" a vested RSU but they can cancel it.
> nor do they "expire".
Yes, they absolutely do expire. Read your employment contract and equity grant agreement carefully.
Right. In the case of OpenAI, their equity grant contracts likely have a non-disparagement clause that allows them to cancel vested shares. Whether or not you think that is a "valid reason" is largely independent of the legal framework governing RSU release.
Perhaps as an example of the blurred line: prenup agreements sprung on the day of the wedding will not hold up in a US court with a competent lawyer challenging them.
You can try to call it 'economic' duress but any non-sociopath sees there are other factors at play.
I have seen a lot of companies put unenforceable stuff into their employment agreements, separation agreements, etc.
https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...
And here is a more detailed explanation:
After all, at this point, OpenAI:
- Is not open with models
- Is not open with plans
- Does not let former employees be open.
It sure does give us a glimpse into the Future of how Open AI will be!
High-level employees (especially if they were board/exec level) will often have additional obligations on top of rank and file.
From the article:
“““
It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
”””
[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...
> our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
I'm not making any statement about the morality, just that this is not a 1a issue.
Keep building your disruptive, game-changing, YC-applicant startup on the APIs of this sociopathic corporation whose products are destined to destroy all trust humans have in other humans so that everyone can be replaced by chatbots.
It's all fine. Everything's fine.
> They don't share the equivalent of a Cap Table with employees, so there's no way to tell what sort of ownership interest a PPU represents
It is known - it represents 0 ownership share. They do not want to sell any ownership because their deal with MS gives MS 49% ownership and they don't want MS to be able to buy up additional stake and control the company.
> And of course, it's unlikely OpenAI will ever turn a profit (which if they did would be capped anyway). So this is all just play money anyway.
Putting aside your unreasonable confidence that OAI will never be profitable, the PPUs are tender-offered, so they can be sold to institutional investors up to a very high limit; OAI's current tender offer round values them at ~$80b iirc
Do you really believe this or is it just hyperbole?
Idk. Folks much smarter than I seem worried so maybe I should be too but it just seems like such a long shot.
So yes, the insiders very likely know a thing or two that the rest of us don’t.
What's one more hyperbole?
Edit to add, provocatively but not sarcastically: next time you hear some AI-proponent-who-used-to-be-a-crypto-proponent roll out the "but aren't we all just LLMs, in essence?" justification for their belief that ChatGPT may have broad understanding, ask yourself: are they not just self-soothing over their part in mass job losses with a nice faux-scientific-inevitability bedtime story?
I know many people on this site will not like what I am about to write, as Sam is worshipped here, but let's face it: the head of this company is a master scammer who will do everything under the sun and the moon to earn a buck, including, if necessary, destroying himself along with his entire fortune in his quest to make sure other people don't get a dime.
So far he has done it all: attempted regulatory capture, a hostile takeover as CEO, thrown out all the other top engineers and partners, and ensured the company remains closed despite its "open" name.
Now he is simply tying up all the loose ends and ensuring his employees remain loyal and are kept on a tight leash. It's a brilliant strategy, preventing any insider from blowing the whistle should OpenAI ever decide to do anything questionable, such as selling AI capabilities to hostile governments.
I simply hope that open source wins this battle so that we are not all completely reliant on OpenAI for the future, despite Sam's attempts.
It takes a man of real principle to stand up against that and tell them to keep their money if they can't speak ill of a potentially toxic work environment.
The most obvious reason is costs - if it costs many millions to train foundation models, they don't have a ton of experiments sitting around on a shelf waiting to be used. They may only get 1 shot at the base-model training. Sure, productization isn't instant, but no one is throwing out that investment or delaying it longer than necessary. I cannot fathom that you can train an LLM at like 1% size/tokens/parameters to experiment on hyperparameters, architecture, etc. and have a strong idea of end-performance or marketability.
Additionally, I've been part of many product launches - both hyped up big-news-events and unheard of flops. Every time, I'd say that 25-50% of the product is built/polished in the mad rush between press event and launch day. For an ML Model, this might be different, but again see above point.
Sure products may be planned month/years out, but OpenAI didn't even know LLMs were going to be this big a deal in May 2022. They had GPT-2 and GPT-3 and thought they were fun toys at that time, and had an idea for a cool tech demo. I think that OpenAI (and Google, etc) are entirely living day-to-day with this tech like those of us on the outside.
Personally I'm not seeing that the path we're on leads to whatever that is, either. But I think/hope I'll know if I'm wrong when it's in front of me.
What's the consideration for this contract?
https://nitter.poast.org/janleike/status/1791498174659715494
No need to fret over the harm to future innovation when innovation is an industrial product.
If I remember correctly the author unsuccessfully tried to get that purged from the Internet
They will have many successes in the short run, but, their long run future suddenly looks a little murky.
Speculation: maybe the options they earn when they work there have some provision like this. In return for the NDA the options get extended.
A lot of people got screwed along the way
Are there any?
>” The Silenced No More Act bans confidentiality provisions in settlement agreements relating to the disclosure of underlying factual information relating to any type of harassment, discrimination or retaliation at work”
In fact both of those seem quite bad, both by regular industry standards, and even moreso as applied to OpenAI's specific situation.
Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.
https://en.wikipedia.org/wiki/Peppercorn_(law)
There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.
They can totally deal with appearing petty and thin-skinned.
(I should reiterate that I actually wrote "serious, possibly criminal")
Turns out they're right: they can put whatever they want in a contract. And again, they are correct that their wage slaves will 99.99% of the time sign whatever paper is pushed in front of them with "as a condition of your continued employment, [...]" attached.
But also it turns out that just because you signed something doesn't mean that's it. My friends (all of us young twenty-something software engineers much more familiar with transaction isolation semantics than with contract law) consulted with an attorney.
The TLDR is that:
- nothing in contract law is in perpetuity
- there MUST be consideration for each side (where "consideration" means getting something. something real. like USD. "continued employment" is not consideration.)
- if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.
- and when it comes to employers and employees, the employee had damn well better be getting a good deal out of it, especially if you are trying to prevent the employee (or ex-employee) from working.
A common pattern ended up emerging: our employer would put something perpetual in the contract and offer no consideration. Our attorney would tell us this isn't even a valid contract and not to worry about it. The employer would offer an employee some nominal amount of USD in severance and put something perpetual into the contract. Our attorney told us the judge would likely use the "blue pencil" rule to add in "for a period of one year", or it would be prorated based on the amount of money they were given relative to their former salary.
(I don't work there anymore, naturally).
Makes you wonder what misdeeds they’re trying so hard to hide.
I think it may be time for something like this: https://www.openailetter.org/
My guess would be that YC founders like sama have some sort of special power to slap down comments that they feel are violating HN discussion guidelines.
Now it’s a money grab.
Sad, because some amazing tech and people are now getting corrupted into a toxic culture that didn’t have to be that way.
Granted, that might be most of the profit they have made, but still, they're probably at at least $0.7T so far. I bet they'll break $1T eventually.
Well, that would obviously depend on the terms of the contract, but I would be astonished if the people who wrote it didn't consider that possibility. It's pretty trivial to calculate the monetary value of equity, and if they feel entitled to that equity, they surely feel entitled to its cash equivalent.
Also, what if you can't sell? Selling is at their discretion. They can prevent you from selling some of your so-called "equity" to keep you on their leash as long as they want.
> PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years. Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value. So in the offer example above, the candidate received $2M worth of PPUs, which means that their capped amount they could sell them for would be $20M
> The most recent liquidation event we’re aware of happened during a tender offer earlier this year. It was during this event that some early employees were able to sell their profit participation units. It’s difficult to know how often these events happen and who is allowed to sell, though, as it’s on company discretion.
This NDA wrinkle is another negative. Honestly I think the entire OpenAI compensation model is smoke and mirrors which is normal for startups and obviously inferior to RSUs.
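A minimal sketch of the 10x cap mechanics from the quoted description, in Python (the $2M grant figure and the 10x cap come from that example; the growth multiples are made-up scenarios):

    # Illustration of the PPU growth cap described in the quote above.
    # The $2M grant and the 10x cap are from that example; growth multiples are hypothetical.
    def capped_sale_value(grant_value, growth_multiple, cap=10.0):
        """Value a PPU grant can be sold for, capped at `cap` times its original value."""
        return grant_value * min(growth_multiple, cap)

    grant = 2_000_000  # $2M of PPUs at grant time

    for growth in (1, 5, 10, 25):
        print(f"{growth:>2}x growth -> sellable for ${capped_sale_value(grant, growth):,.0f}")
    # 25x growth still pays out only $20,000,000 because of the 10x cap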
It's quite natural, that a co-founder, being forced out of the company wouldn't be exactly willing to forfeit his equity. So, what, now he cannot… talk? That has some Mexican cartel vibes.
If it was "we'll give you shares/cash if you don't say anything bad about us", that's normal, kind of standard fare for exit agreements, it's why severance packages exist.
But if it is "we'll take away the shares that you already earned as part of your regular employment compensation unless you agree to not say anything bad about us", that's extortion.
Cryptography is a prime example. Any time any company is the tiniest bit cagey or obfuscates any aspect, I default to assuming that they’re either selling snake oil or have installed NSA back doors. I’ll claim this openly, as a fact, until proven otherwise.
For example, you may join a company and be given options to buy 10,000 shares at $5 each with a 2 year vesting schedule. They may begin vesting immediately, meaning you can buy 1/24th of the total options each month (roughly 416 shares). It's also common for a delay up front where no options vest until you've been with the company for, say, 6 or 12 months.
Until an option vests you don't own anything. Once it vests, you still have to buy the shares by exercising the option at the $5 per share price. When you leave, most companies have a deadline on the scale of a few months where you have to either buy all vested shares or forfeit them and lose the stock options.
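A minimal sketch of that vesting math, in Python (the 10,000-option / $5 / 24-month figures are from the example above, and the optional cliff is the "delay up front" just mentioned; none of this is any particular company's real terms):

    # Hypothetical illustration of the option-vesting mechanics described above.
    def vested_options(months_employed, total_options=10_000,
                       vesting_months=24, cliff_months=0):
        """Options vested after a given number of months (with an optional up-front cliff)."""
        if months_employed < cliff_months:
            return 0  # nothing vests before the cliff
        months = min(months_employed, vesting_months)
        return total_options * months // vesting_months

    strike_price = 5.00  # dollars per share, from the example above

    for m in (1, 6, 12, 24):
        vested = vested_options(m)
        print(f"{m:>2} months: {vested:>6} vested, ${vested * strike_price:,.0f} to exercise")
    # 1 month: 416 vested; 24 months: 10,000 vested, $50,000 to exercise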
whether people should be able to hold on to that billion is a different question
I know for a fact that these bits are inaccurate, but I don't want to go into the details.
The profit share is not known, but you are told what the PPUs were valued at in the most recent tender offer.
Care to explain? Absurd how? An internal contradiction somehow? Unimportant for some reason? Impossible for some reason?
How can I be confident you aren't committing the fallacy of collecting a bunch of events and saying that is sufficient to serve as a cohesive explanation? No offense intended, but the comment above has many of the qualities of a classic rant.
If I'm wrong, perhaps you could elaborate? If I'm not wrong, maybe you could reconsider?
Don't forget that alignment research has existed longer than OpenAI. It would be a stretch to claim that the original AI safety researchers were using the pretexts you described -- I think it is fair to say they were involved because of genuine concern, not because it was a trendy or self-serving thing to do.
Some of those researchers and people they influenced ended up at OpenAI. So it would be a mistake or at least an oversimplification to claim that AI safety is some kind of pretext at OpenAI. Could it be a pretext for some people in the organization, to some degree? Sure, it could. But is it a significant effect? One that fits your complex narrative, above? I find that unlikely.
Making sense of an organization's intentions requires a lot of analysis and care, due to the combination of actors and varying influence.
There are simpler, more likely explanations, such as: AI safety wasn't a profit center, and over time other departments in OpenAI got more staff, more influence, and so on. This is a problem, for sure, but there is no "pearl clutching pretext" needed for this explanation.
Even lowest level fast food workers can choose a different employer. An engineer working at OpenAI certainly has a lot of opportunities to choose from. Even when I only had three years in the industry, mid at best, I asked to change the contract I was presented with because non-compete was too restrictive — and they did it. The caliber of talent that OpenAI is attracting (or hopes to attract) can certainly do this too.
As for 'invalid because no consideration' - there is practically zero probability OpenAI lawyers are dumb enough to not give any consideration. There is a very large probability this reporter misunderstood the contract. OpenAI have likely just given some non-vested equity, which in some cases is worth a lot of money. So yeah, some (former) employees are getting paid a lot to shut up. That's the least unique contract ever and there is nothing morally or legally wrong with it.
Employees and employer enter into an agreement: Work here for X term and you get Y options with Z terms attached. OK.
But, then later pulling Darth Vader… “Now that the deal is completing, I am changing the deal. Consent and it’s bad for you this way. Don’t consent and it’s bad that way. Either way, you held up your end of our agreement and I’m not.”
Based on these companies' arguments that copyrighted material is not actually reproduced by these models, and that any seemingly-infringing use is the responsibility of the user of the model rather than those who produced it, anyone could freely generate an infinite number of high-truthiness OpenAI anecdotes, freshly laundered by the inference engine, that couldn't be used against the original authors without OpenAI invalidating their own legal stance with respect to their own models.
I got laid off by a different company and can't disparage them. I can tell the truth. I'm not signing anything that requires me to lie.
Isn't that the reason more competent lawyers put in the royal lives[1] clause? It specifies the contract is valid until 21 years after the death of the last currently-living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly be beyond the lifetime of any currently-employed persons.
You can't just sign a contract and then not uphold your end of the bargain after you've got the benefit you want. You'll (rightfully) get sued.
The vast majority of datacenters currently in production will be entirely powered by carbon free energy. From best to worst:
1. Meta: 100% renewable
2. AWS: 90% renewable
3. Google: 64% renewable with 100% renewable energy credit matching
4. Azure: 100% carbon neutral
[1]: https://sustainability.fb.com/energy/
[2]: https://sustainability.aboutamazon.com/products-services/the...
[3]: https://sustainability.google/progress/energy/
[4]: https://azure.microsoft.com/en-us/explore/global-infrastruct...
But none of this means the company can just cancel your RSUs unless you agreed to them being cancelled for specific reason in your equity agreement. I have worked at several big pre-IPO companies that had big exits. I made sure there were no clawback clauses in the equity contract before accepting the offers.
IMO, we should pause this for now and put these resources (human and capital) towards reducing the impact of global warming.
Wish I could say I would have been that strong. Many would not disparage a company they hold equity in, unless they went full baby genocide.
The argument about LLMs not being copyright laundromats making sense hinges on the scale and non-specificity of training. There's a difference between "LLM reproduced this piece of copyrighted work because it memorized it from being fed literally half the internet", vs. "LLM was intentionally trained to specifically reproduce variants of this particular work". Whatever one's stance on the former case, the latter case would be plain infringing copyright and admitting to it.
In other words: GPT-4 gets to get away with occasionally spitting out something real verbatim. Llama2-7b-finetune-NYTArticles does not.
Ta-da.
So I think an argument can be made that NDAs and similar agreements should not be enforceable by courts.
See Shelley v. Kraemer
You would think having a massive scale just means it has infringed even more copyrights, and therefore should be in even more hot water
See, they aren't distributing the words, and good luck proving that any specific words went into training the model.
Worse if it is related to training future super intelligence to kill people. Killer drones are possible even with today's technology without AGI.
My point isn't to argue merits of that case, it's just to point out that OP's joke is like a stereotypical output of an LLM: seems to make sense, but really doesn't.
"I agree to follow unspecified terms in perpetuity, or return the pay I already earned" doesn't vibe with labor laws.
And if those NDA terms were already in the contract, there would be no need to sign them upon exit.
That’s neither efficient nor optimized, just a bogeyman for “doesn’t work”.
Incidentally, that's what Grigory Perelman, the mathematician that rejected the Fields Medal and the $1M prize that came with it, did.
It wasn't a matter of an NDA either; it was a move to make his message heard (TL;DR: "publish or perish" rat race that the academia has become is antithetical to good science).
He was (and still is) widely misunderstood in that move, but I hope people would see it more clearly now.
The enshittification processes of academic and corporate structures are not entirely dissimilar, after all, as money is at the core of corrupting either.
That’s not how it works. It doesn’t matter if you write the words yourself or have an agent write them for you. In either case, it’s the communication of the covered information that is proscribed by these kinds of agreements.
This is the kind of thing a cult demands of its followers, or an authoritarian government demands of its citizens. I don't know why people would think it's okay for a business to demand this from its employees.
Only thanks to a recent ruling by the FTC that non-competes are invalid. In the most egregious cases, bartenders and servers were prohibited from finding another job in the same industry for two years.
Basically, we need our open-source version of Glassdoor as an LLM?
Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.
Once again, we see the difference between the public narrative and the actions in a legal context.
If the NDA terms were agreed in an employment contract they would no longer be valid upon termination of that contract.
OP wants to achieve effects of specific accusation using only non-specific means; that's not easy to pull off.
It’s the Wild West. The lack of a court case has no bearing on whether or not what they’re doing is right or wrong.
Forced myself through some parts of it and all I can get is people don’t know what they want so it would be nice to build an oracle. Yeah, I guess.
1) the purpose and character of use.
2) the nature of the copyrighted material.
3) the *amount* and *substantiality* of the portion taken, and
4) the effect of the use upon the *potential market*.
So in that regard, if you're training a personal assistance GPT, and use some software code to teach your model logic, that is easy to defend as fair use.
But the extent of use matters, and if you're training an AI for the sole purpose of regurgitating specific copyrighted material, it is infringement, if it is copyrighted, but in this case, it is not copyright issue, it is contracts and NDAs.
In the OpenAI case, the gesture of "forgoing millions of dollars" directly makes you able to do something you couldn't - speak about OpenAI publicly. In the Grigory Perelman case, obviously the message was far less clear to most people (I personally have heard of him turning down the money before and know the broad strokes of his story, but had no idea that that was the reason).
What are your timelines here? "Catastrophic" is vague but I'd put the climate change meaningfully affecting the quality of life of average westerner at end of century, while AGI could be before the middle of the century.
“What was the company culture like?” “Etc. platitude so on and so forth”
“And I heard the CEO was a total dickbag. Was that your experience working with him?” “I don’t have anything to add on that subject”
Of course, going back and forth on that won’t really work, but to different people you can't be expected not to say the nice things, and then someone could build up a story based on that.
1) OpenAI wouldn't want the negative PR of pursuing legal action against someone top in their field; his peers would take note of it and be less willing to work for them.
2) The stuff he signed was almost certainly different from what rank and file signed, if only because he would have sufficient power to negotiate those contracts.
Which is why creating a new type of intelligent entity that could be more powerful than humans is a very bad idea: we don't even know how to align the humans and we have a ton of experience with them
Hmmmn. Most of the humans where I work do things physically with their hands. I don't see what AI will achieve in their area.
Can AI paint the walls in my house, fix the boiler and swap out the rotten windows? If so I think a subscription to chat GPT is very reasonably priced!
But how is that even possible when corporations are typically run by ghouls who enjoy relativistic morals when it suits them. And are beholden to profits, not ethics.
Wait that's a thing? Can you give more detail about this/what to look into to learn more?
If imaginary cloud provider "ZFQ" uses 10MW of electricity on a grid and pays for it to magically come from green generation, that means 10MW of other loads on the grid were not powered by green energy, or 10MW of non-green power sources likely could have been throttled down/shut down.
There is no free lunch here; "we buy our electricity from green sources" is greenwashing bullshit.
Even if they install solar on the roofs and wind turbines nearby - that's still electrical generation capacity that could have been used for existing loads. By buying so many solar panels in such quantities, they affect availability and pricing of all those components.
The US, for example, has about 5GW of solar manufacturing capacity per year. NVIDIA sold half a million H100 chips in one quarter, each of which uses ~350W, which means in a year they're selling enough chips to use 700MW of power. That does not include power conversion losses, distribution, cooling, and the power usage of the host systems, storage, networking, etc.
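Spelling out that back-of-envelope arithmetic in Python (the chip-sales and per-chip-wattage figures are the assumptions above, not measured data):

    # Back-of-envelope power math from the paragraph above; excludes host systems,
    # cooling, and conversion losses, as noted.
    h100_sold_per_quarter = 500_000
    watts_per_h100 = 350

    chips_per_year = h100_sold_per_quarter * 4
    total_mw = chips_per_year * watts_per_h100 / 1e6
    print(f"{chips_per_year:,} chips/year -> {total_mw:.0f} MW of chip power")  # 700 MW

    us_solar_manufacturing_gw_per_year = 5  # rough US panel manufacturing capacity
    share = total_mw / (us_solar_manufacturing_gw_per_year * 1000)
    print(f"~{share:.0%} of one year of US solar panel manufacturing")  # ~14%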
And that doesn't even get into the water usage and carbon impact of manufacturing those chips; the IC industry uses a massive amount of water and generates a substantial amount of toxic waste.
It's hilarious how HN will wring its hands over how much rare earth metals a Prius has and shipping it to the US from Japan, but ask about the environmental impacts of AI and it's all "pshhtt, whatever".
If you're only training on a handful of works then you're taking more from them, meaning it's not de minimis.
For the record, I got this legal theory from Cory Doctorow[0], but I'm skeptical. It's very plausible, but at the same time, we also thought sampling in music was de minimis until the Second Circuit said otherwise. Copyright law is extremely malleable in the presence of moneyed interests, sometimes without Congressional intervention even!
[0] who is NOT pro-AI, he just thinks labor law is a better bulwark against it than copyright
If there is something unenforceable about these contracts, we have the court system to settle these disputes. I’m tired of living in a society where everyone’s dirty laundry is aired out for everyone to judge. If there is a crime committed, then sure, it should become a matter of public record.
Otherwise, it really isn’t your business.
Large language models are not "smart". They do not have thought. They don't have intelligence despite the "AI" moniker, etc.
They vomit words based off very fancy statistics.
There is no path from that to "thought" and "intelligence."
Yes, but:
(1) OpenAI salaries are not low like early stage startup salaries. Essentially these are highly paid jobs (high salary and high equity) that require an NDA.
(2) Apple has also clawed back equity from employees who violate NDA. So this isn't all that unusual.
Releasing an LLM trained on company criticisms, by people specifically instructed not to do so is transparently violating the agreement.
Because you're intentionally publishing criticism of the company.
This thread is full of comments making statements around this looking like some level of criminal enterprise (ranging from “no way that document holds up” to “everyone knows Sam is a crook”).
The level of stuff ranging from vitriol to overwhelming, if maybe circumstantial (but conclusive to my personal satisfaction), evidence of direct reprisal has just been surreal, but it’s surreal in a different way to see people talking about this like it was never even controversial to be skeptical/critical/hostile to this thing.
I’ve been saying that this looks like the next Enron, minimum, for easily five years, arguably double that.
Is this the last straw where I stop getting messed around over this?
I know better than to expect a ticker tape parade for having both called this and having the guts to stand up to these folks, but I do hold out a little hope for even a grudging acknowledgment.
Does it really take millions dollars of compute to add additional training data to an existing model?
Plus, we're talking about employees that are leaving / left anyway.
>Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.
Excellent. That means plausible deniability.
Surely all those horror stories about unethical behavior are just hallucinations, no matter how specific they are.
Absolutely no reason for anyone to take them seriously. Which is why the press will not hesitate to run with that, with appropriate disclaimers, of course.
Seriously, you seem to think that in a world where numbers about death toll in Gaza are taken verbatim from Hamas without being corroborated by other sources, an AI model output will not pass the test of public scrutiny?
Very optimistic of you.
The word-probabilities are transformative use, a form of fair use and aren't an issue.
The specific output at each point in time is what would be judged to be fair use or copyright infringing.
I'd argue the user would be responsible for ensuring they're not infringing by using the output in a copyright infringing manner i.e. for profit, as they've fed certain inputs into the model which led to the output. In the same way you can't sue Microsoft for someone typing up copyrighted works into Microsoft Word and then distributing for profit.
De minimis is still helpful here; not all infringements are noteworthy.
These were profit sharing units vs options.
But if your job is mostly sitting at a computer, I would be a bit worried.
If you've been working on AI, you've seen everything go up and to the right for a while - who really benefits from pointing out that a slowdown is occurring? Who is incentivized to talk about how the benefits from scaling are slowing down or the publicly available internet-scale corpuses are running out? Not anyone who trains models and needs compute, I can tell you that much. And not anyone who has a financial interest in these companies either.
It's up to you if that counts as "a handful" or not.
Based on what? This isn't any legal argument that will hold water in any court I'm aware of
So they probably won't.
Everyone including the board's own chosen replacements for Altman siding with Altman seems to me to not be compatible with his current leadership being the root cause of the current discontent… so I'm blaming Microsoft, who were the moustache-twirling villains when I was a teen.
Of course, thanks to the NDAs hiding information, I may just be wildly wrong.
If OpenAI and ChatGPT is so far ahead for everyone else, and their product is so complex, it doesn't matter what a few disgruntled employees do or say, so the rule is not required.
If we take math or computer science for example: some very important algorithms can be compressed to a few bits of information if you (or a model) have a thorough understanding of the surrounding theory to go with it. Would it not amount to IP infringement if a model regurgitates the relevant information from a patent application, even if it is represented by under a kilobyte of information?
I’d say there is a lot of money available in replacing blue-collar jobs with AI-powered robots. Even if they do crap work, it’s still better quality than contractors.
>...
>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.
From OpenAI's charter: https://openai.com/charter/
Now read Jan Leike's departure statement: >>40391412
That's why this is everyone's business.
Equity adds a wrinkle here, but I suspect if the effect of canceling equity is to cause a forfeiture of earned wages, then ultimately whatever contract is signed under that threat is void.
No. Renewable energy capacity is often built out specifically for datacenters.
> Even if they install solar on the roofs and wind turbines nearby - that's still electrical generation capacity that could have been used for existing loads.
No. This capacity would never never have been built out to begin with if it was not for the data center.
> By buying so many solar panels in such quantities, they affect availability and pricing of all those components.
No. Renewable energy gets cheaper with scale, not more expensive.
> which means in a year they're selling enough chips to use 700MW of power.
There are contracts for renewable capacity to be built out, well into the gigawatts. Furthermore, solar is not the only source of renewable energy. Finally, nuclear energy is also often used.
> the IC industry uses a massive amount of water
A figurative drop in the bucket.
> It's hilarious how HN will wring its hands
HN is not a monolith.
Employment Contract the First:
We are paying you (WAGE) for your labor. In addition you also will be paid (OPTIONS) that, after a vesting period, will pay you a lot of money. If you terminate this employment your options are null and void unless you sign Employment Contract the Second.
Employment Contract the Second:
You agree to shut the fuck up about everything you saw at OpenAI until the end of time and we agree to pay out your options.
Both of these have consideration and as far as I'm aware there's nothing in contract law that requires contracts to be completely self-contained and immutable. If two parties agree to change the deal, then the deal can change. The problem is that OpenAI's agreements are specifically designed to put one counterparty at a disadvantage so that they have to sign the second agreement later.
There is an escape valve in contract law for "nobody would sign this" kinds of clauses, but I'm not sure how you'd use it. The legal term of art that you would allege is that the second contract is "unconscionable". But the standard of what counts as unconscionable in contract law is extremely high, because otherwise people would wriggle out of contracts the moment that what seemed like favorable terms turned unfavorable. Contract law doesn't care if the deal is fair (that's the FTC's job), it cares about whether or not the deal was agreed to.
What made you think it was the next Enron five years ago?
I appreciate you having the guts to stand up to them.
I find it hard to understand that in a country that tends to take freedom of expression so seriously (and I say this unironically, American democracy may have flaws but that is definitely a strength) it can be legal to silence someone for the rest of their life.
But most people don't want to live in permanent mental distress due to shame of past action or fear of rebellion, I guess.
There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.
[0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...
But the line between healthy and unlawful transgression can be a thin line
That's easy, we just need to make meatspace people stupider. Seems to be working great so far.
Making people believe that anything but their own body and mind can be considered part of their own properties is stealing their lucidity.
Honestly? I'm not too worried
We've seen how the Google employee who was "seeing consciousness" (in what was basically GPT-2 lol) was a nothing burger
We've seen other people in "AI Safety" overplay their importance and hype their CV more than actually do any relevant work. (Usually also playing the diversity card)
So, no, AI safety is important but I see it attracting the least helpful and resourceful people to the area.
TL;DR train a seed AI to guess what humans would want if they were "better" and do that.
Hey hey hey! Sam founded the 4th most popular social networking site in 2005, called Loopt. Don't you forget that! (After that he joined YC and has founded nothing ever since.)
I’ve never been a YC-funded founder myself, but I’ve had multiple roommates who were, and a few girlfriends who were on the bubble of like, founder and early employee, and I’ve just generally been swimming in that pool to one degree or another for coming up on 20 years (I always forget my join date but it’s on the order of like, 17 years or something).
So when a few dozen people you trust tell you the same thing, you tend to buy it even if you’re not quite ready to print the worst hearsay (and I’ve heard things about Altman that I believe but still wouldn’t print without proof, dark shit).
The litany of scandals mounted (Green Dot, zero-rated pre-IPO portfolio stock with, like, his brother involved, Socialcam, the list just goes on), and at some point real journalists started doing pieces (New Yorker, etc.).
And while some of my friends and former colleagues (well maybe former friends now) who joined are both eminently qualified and as ethical as this business lets anyone be, there was a skew there too, it skewed “opportunist, fails up”.
So it’s a growing preponderance of evidence starting in about 2009 and being “published by credible journalists” starting about five years later; at some point I’m like “if even 5% of this is even a little true, this is beyond the pale”.
It’s been a gradual thing, and people giving the benefit of the doubt up until the November stuff are maybe just really charitable, at this point it’s like, only a jury can take the next steps trivially indicated.
Well... I know first hand that many well-informed, tech-literate people still think that all products from OpenAI are open-source. Lying works, even in that most egregious of fashion.
That being said, the GP you’re talking about made no such statement whatsoever.
But how much do you need? Sell half, forgo the rest, and you'll be fine.
Who would sign a contract to willfully give away their options?
Training an LLM with the intent of contravening an NDA is just plain intent to contravene an NDA. Everyone would still get sued anyway.
I would think if I can recognize exactly what song it comes from - not de minimis.
> LLMs not being copyright laundromats
This is a brilliant phrase. You might as well put that into an Emacs paste macro now. It won't be the last time you will need it. And the OP is classic HN folly where a programmer thinks laws and courts can be hacked with "this one weird trick". Unfortunately, Orwellian propaganda works.
I think this is all still compatible with saying that ingesting an entire book is still:
> If you're taking a handful of word probabilities from every book ever written, then the portion taken from each work is very, very low
(Though I wouldn't want to make a bet either way on "so courts aren't likely to care" that follows on from that quote: my not-legally-trained interpretation of the rules leads to me being confused about how traditional search engines aren't a copyright violation).
> Collateralised Copyright Liability
Is this a real legal / finance term or did you make it up? Also, I do not follow your leap to compare LLMs to CDOs (collateralised debt obligations). And do you specifically mean CDOs or any kind of mortgage / commercial loan structured finance deal?
Because that's the distinction being argued here: it's "a handful"[0] of probabilities, not the complete work.
[0] I'm not sold on the phrasing "a handful", but I don't care enough to argue terminology; the term "handful" feels like it's being used in a sorites paradox kind of way: https://en.wikipedia.org/wiki/Sorites_paradox
The process involved in obtaining that end work is completely irrelevant to any copyright case. It can be a claim against the models weights (not possible as it's fair use), or it's against the specific once off output end work (less clear), but it can't be looked at as a whole.
Copyright has fair uses clauses, endless court decisions limiting its use, carve outs for libraries, additional junk like the DMCA and more slapped on top. It's a patchwork of dozens of treaties and laws, spanning hundreds of years.
For example, you can read a book to a room full of kids, you can use copyright materials in comedic skits, you can quote snippets, the list goes on. And again, this is all legislated.
The point? It's complex, and specific usage of copyrighted works infringing or not, can be debatable without intent immediately being malign.
Meanwhile, an NDA covers far, far more than copyright. It may cover discussion and disclosure of everything or anything, including even client lists, trade secrets, work processes, and more. It is signed, and agreed to by both parties involved. Equating "copyright law" to "an NDA" is a non-starter. There's literally zero legal parallel or comparison here.
And as others have mentioned, the intent of the act would be malicious on top of all of this.
I know a lot of people dislike the whole data snag by OpenAI, and have moral or ethical objections to closed models, but thinking anyone would care about this argument if you breach an NDA is a bad idea. No judge would even remotely accept or listen to such chicanery.
More generally, we tend to view the number of casualties in a war as one large number, and not as the sum of all the individual tragedies it represents, the kind we perceive when fewer people die.
I meant of the employees, obviously not the board.
Also excluded: all the people who never worked there who think Altman is weird, Elon Musk who is suing them (and probably the New York Times on similar grounds), and the protestors who dropped leaflets on one of his public appearances.
> and all of those who’ve left the company?
Happened after those events; at the time it was so close to being literally every employee who signed the letter saying "bring Sam back or we walk" that the rest can be assumed to have been off sick that day, even despite the reputation the US has for very limited holidays and getting people to use those holidays for sick leave.
> It seems to me more like those people are leaving who are rightly concerned about the direction things are going, and those people are staying who think that getting rich outweighs ethical – and possibly existential – concerns. Plus maybe those who still believe they can effect a positive change within the company.
Obviously so, I'm only asserting that this doesn't appear to be due to Altman, despite him being CEO.
("Appear to be" is of course doing some heavy lifting here: unless someone wants to literally surveil the company and publish the results, and expect that to be illegal because otherwise it makes NDAs pointless, we're all in the dark).
If you were a director at a game company and needed art in that style, it would be cheaper to have the AI do it instead of buying from the artist.
I think this is currently an open question.
As my not-legally-trained interpretation of the rules leads to me being confused about how traditional search engines aren't a copyright violation, I don't trust my own beliefs about the law.
https://www.nytimes.com/2023/12/27/business/media/new-york-t... https://www.reuters.com/legal/us-newspapers-sue-openai-copyr... https://www.washingtonpost.com/technology/2024/04/09/openai-...
Some decided to make deals instead
Uber is no longer subsidized (or even cheap) in most places, it's just an app for summoning taxis and overpriced snacks. AirBnB is underregulated housing for nomads at this point.
Your examples sorta prove the point - they didn't succeed in what they aimed at doing, so they pivoted until the law permitted it.
It's better and quicker search at present for the area I specialise in.
It's not currently even close to being a 2x multiplier for me; it's possibly even a negative impact, probably not, but I'm still exploring. Which feels detached from the promises. Interesting, but at present more hype than hyper. Also, it's energy inefficient, so cost heavy. I feel that will likely cripple a lot of use cases.
What's your take?
no one building this software wants to “steal from creators” and the legal precedent for using copyrighted works for the purpose of training is clear with the NYT case against open AI
It’s why things like the recent deal with Reddit to train on their data (which Reddit owns and users give up when using the platform) are becoming so important, same with Twitter/X
https://www.federalregister.gov/documents/2023/03/16/2023-05...
So I think the law, at least as currently interpreted, does care about the process.
Though maybe you meant as to whether a new work infringes existing copyright? As this guidance is clearly about new copyright.
See the assassination attempts on president Jackson.
I also work hard not to print gossip and hearsay (I try not to even mention so much as a first name; I think I might have slipped once or twice on that, though not in connection with an accusation of wrongdoing). There's more than enough credible journalism to paint a picture. Any person whose bias (and I have my own, but it's not over being snubbed for a job or something; it's a philosophical/ethical/political agenda) has not utterly robbed them of objectivity can acknowledge that "this looks really bad and worse all the time" on the basis of purely public primary sources and credible journalism.
I think some of the inside baseball I try very hard not to put in writing might be what cranks it up to “people are doing time”.
I’ve caught more than a little “less than a great time” over being a vocal critic, but I’m curious why, having gone pretty far down the road of saying something is rotten, you’d declare a willingness to defy a grand jury or a judge.
I’ve never been in court, let alone held in contempt, but I gather openly defying a judge gets you fairly hard time.
I have friends I’d go to jail for, but not very many and none who work at OpenAI.
What is the difference OpenAI has that lets them get away with, but not our hypothetical Mr. Smartass doing the same process trying to get around an NDA?
We do not have that. The cost of energy is mis-priced, although we are limping our way to fixing that.
Paying the likely fair cost for our goods will probably kill a lot of current industries, while others which are currently unviable will become viable.
Whether the brazenness with which they are doing this will work out for them is currently playing out in the courts.
I agree with the majority of the points you made. The exception is this:
> A figurative drop in the bucket.
Fresh water sources are limited. Fabs' water demands and pollution are high impact.
Calling that a drop in the bucket falls into the weasel-words category.
We still need fabs, because we need chips. Harm will be done here. However, that is a cost we, as a society, will choose to pay.
Who created the work? It's the user who instructed the AI (it's a tool); you can't attribute it to the AI. It would be the equivalent of Photoshop being credited as co-author of your work.
The user is "inputting variables into their probability algorithm that's resulting in the copyright work".
Not fully accurate. There is indeed renewable energy produced exclusively for the datacenter. But it is challenging to rely only on renewable energy (it is intermittent, and electricity is hard to store at scale, so you often need to consume it when it is produced). So what happens in practice is that the electricity that does not come from dedicated renewable capacity comes from the grid. What companies do is invest in renewable capacity on the grid so that the non-renewable energy they consume at time t (because not enough renewable energy is available at that moment) is offset by someone else consuming renewable energy later. What I am saying here is not pure speculation; look at the link to Meta's website, they say themselves that this is what they are doing.
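To make the annual-matching idea concrete, here's a minimal sketch with made-up hourly numbers (not Meta's actual figures); `hourly_demand_mwh` and `dedicated_renewable` are purely illustrative:

```python
# Minimal sketch (made-up numbers) of annual renewable "matching":
# hours where dedicated renewables fall short are covered by the grid,
# and the company funds enough extra renewable generation elsewhere
# so that the yearly totals balance out.

hourly_demand_mwh   = [10, 10, 10, 10]  # datacenter load per hour
dedicated_renewable = [12,  4,  0,  9]  # contracted wind/solar output per hour

grid_draw = [max(d - r, 0) for d, r in zip(hourly_demand_mwh, dedicated_renewable)]
surplus   = [max(r - d, 0) for d, r in zip(hourly_demand_mwh, dedicated_renewable)]

# Extra renewable MWh to fund on the grid so that, over the period, renewable
# generation attributed to the company equals its total consumption.
shortfall = sum(grid_draw) - sum(surplus)

print(f"grid draw: {sum(grid_draw)} MWh, surplus exported: {sum(surplus)} MWh")
print(f"extra renewable MWh to fund for a 100% annual match: {max(shortfall, 0)}")
```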
> It’s why things like the recent deal[s ...] are becoming so important
Sorry but I don't follow. Is it one or the other?
If they didn't want to steal from the original authors, why do they not-steal Reddit now? What happens with the smaller creators that are not Reddit? When is OpenAI meeting with me to discuss compensation?
To me your post felt something like "I'm not robbing you, Small State Without Defense that I just invaded, I just want to have your petroleum, but I'm paying Big State for theirs cause they can kick my ass".
Aren't the recent deals actually implying that everything so far has actually been done with the intent of not compensating their source data creators? If that was not the case, they wouldn't need any deals now, they'd just continue happily doing whatever they've been doing which is oh so clearly lawful.
What did I miss?
It's not merely a compressed version of a song intended to be used in the same way as the original copyright work, this would be copyright infringement.
Would any non-corrupt judge consider this to be done in bad faith?
How is this different if we use a great ancient sea turtle (or some other long-lived organism) instead of the current royal family baby? Like, I guess my point is: anything that would likely outlive the employee, basically?
An ai-enhanced Photoshop, however, could do wonders though as the base capabilities seem to be mostly there. Haven't used any of the newer ai stuff myself but https://www.shruggingface.com/blog/how-i-used-stable-diffusi... makes it pretty clear the building blocks seem largely there. So my guess is the main disconnect is in making the machines understand natural language instructions for how to change the art.
Plus all the self-driving lies, and more lies, well within fraud territory at this point. Not even going into his sociopathic personality, massive childish ego, and apparent 'daddy issues', which in men manifest exactly like him. He is not in day-to-day control of SpaceX and it shows.
Since so many people took time to put him down here, can anybody provide some explanation to me? Preferably not just about how closed OpenAI is, but specifically about Sam. He is in a pretty powerful position and maybe I'm missing some info.
They tend to try to argue for conspiracy to commit copyright infringement; it's a tenuous case to make unless they can prove that was actually their intention. I think in most cases it's ISP/hosting terms and conditions and legal costs that lead to their demise.
Your example of the model asking specifically "what copyrighted content would you like to download", kinda implies conspiracy to commit copyright infringement would be a valid charge.
And there are advantages to exercising: many (most?) companies take back unexercised options a few weeks/months after you leave, and exercising starts the capital gains tax (CGT) clock, so you can end up paying less CGT when you eventually sell.
You need to understand all this stuff before you make a choice that's right for you
Perfect! So it's so incredibly overreaching that any judge in California would deem the entire NDA unenforceable..
Either that or, in your effort to overstate a point, you exaggerated in a way that undermines the point you were trying to make.
Bear in mind there are actually three options, one is signing the second contract, one is not signing, and the other is remaining an employee.
https://www.reddit.com/r/OpenAI/comments/1804u5y/former_open...
1. If he hadn't turned down the money, you wouldn't have heard of him at all;
2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people it was aimed at heard the message loud and clear.
3. On a very basic level, it's very easy to understand that there's gotta be something wrong with the award if a deserving recipient turns it down. What exactly is wrong is left as an exercise to the reader — as you'd expect of a mathematician like Perelman.
Quote (from [1]):
From the few public statements made by Perelman and close colleagues, it seems he had become disillusioned with the entire field of mathematics. He was the purest of the purists, consumed with his love for mathematics, and completely uninterested in academic politics, with its relentless jockeying for position and squabbling over credit. He denounced most of his colleagues as conformists. When he opted to quit professional mathematics altogether, he offered this confusing rationale: “As long as I was not conspicuous, I had a choice. Either to make some ugly thing or, if I didn’t do this kind of thing, to be treated as a pet. Now when I become a very conspicuous person, I cannot stay a pet and say nothing. That is why I had to quit.”
This explanation is confusing only to someone who has never tried to get a tenured position in academia.
Perelman was one of the few people to not only give the finger to the soul-crushing, dehumanizing system, but to also call it out in a way that stung.
He wasn't the only one; but the only other person I can think of is Alexander Grothendieck [2], who went as far as declaring that publishing any of his work would be against his will.
Incidentally, both are of Russian-Jewish origin/roots, and almost certainly autistic.
I find their views very understandable and relatable, but then again, I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).
[1] https://nautil.us/purest-of-the-purists-the-puzzling-case-of...
pg calls sama ‘naughty’. I call him ‘dangerous’.
Not only would this be viewed as unethical in Germany, I could see a CEO going to prison for such a thing.
I know a manager for an EV project at a big German auto company who also had to sign one when he was let go and was compensated handsomely to keep quiet and not say a word or face legal consequences.
IIRC he got ~12 months wages. After a year of not doing anything at work anyway. Bought a house in the south with it. Good gig.
(though most likely the NDA and everything is there from day 1 and there's no second contract, no?)
A case where it obviously makes sense is something like a covenant between two companies; whose life would be relevant there, if both parties want the contract to last a long time and have to pick one? The CEOs? Employees? Shareholders? You could easily have a situation where the company gets sold and they all leave, but the contract should still be relevant, and now it depends on the lives of people who are totally unconnected to the parties. Just makes things difficult. Using a monarch and his currently living descendants is easy.
I'm not sure how relevant it is in a more employer employee context. But it's a formalism to create a very long contract that's easy to track, not a secret trick to create a longer contract than you're normally allowed to. An employer asking an employee to agree to it would have no qualms asking instead for it to last the employee's life, and if the employee's willing to sign one then the other doesn't seem that much more exploitative.
Are employees being misled about the contract terms at the time of signing? Because, obviously, the original contract needs to have some clause regarding the equity situation, right? We cannot just make that up at the end. So... are we claiming fraud?
What I suspect is happening, is that we are confusing an option to forgo equity for an option to talk openly about OpenAI stuff (an option that does not even have to exist in the initial agreement, I would assume).
Is this overreach? Is this whole thing necessary? That seems beside the point. Two parties agreed to the terms when signing the contract. I have a hard time thinking of top AI researchers as coerced into taking a job at OpenAI, or unable to understand a contract, or to understand that they should pay someone to explain it to them – so if that's not a free decision, I don't know what is.
Which leads me to: If we think the whole deal is pretty shady – well, it took two.
You know what AI is actually gonna be useful for? AR source attachments to everything that comes out of our monkey mouths, or a huge floating [no source] over someone's head.
Realtime factual accuracy checking pls I need it.
…no one “started from scratch", the sum of all knowledge is built on prior foundations.
Let’s also be clear that making deals with Reddit isn’t stealing from creators; it’s not a platform where you own what you type in. Same on here: this is all public domain with no assumed rights to the text. If you write a book and OpenAI trains on it and starts telling it to kids at bedtime, you 100% will have a legal claim in the future, but the companies already have protections in place to prevent exactly that. For example, if you own your website you can request the data not be crawled, but ultimately if your text is publicly available anyone is allowed to read it, and whether anyone is allowed to train AI on it is an open question that companies are trying to get ahead of.
It's quite common for companies to put tons of extremely restrictive terms in an NDA that they can't actually legally enforce, to scare potential future ex-employees off from creating a problem.
GPT can't get retroactively untrained on stolen data.
It's a common mistake on here to assume that for every decision there are equally good other options. Also, the fact that they feel the need to enforce silence so strongly implies at least a little that they have something to hide.
Delusional.
They’re profit participation units and probably come with a few gotchas like these.
> After publication, an OpenAI spokesperson sent me this statement: “We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.”
- Updated May 17, 2024, 11:20pm EDT
I’m not sure what you mean by “steal” because it’s a relative term now, me reading your book isn’t stealing if I paid for it and it inspires me to write my own novel about a totally new story. And if you posted your book online, as of right now the legal precedent is you didn’t make any claims to it (anyone could read it for free) so that’s fair game to train on, just like the text I’m writing now also has no protections.
Nearly all of Reddit's history up to a certain date is available for download online; only after they changed their policies did they start having tighter controls over how their data could be used.
Weather is not climate, as everyone is so careful to point out during cold waves.
That so many people in the AI safety "community" consider him a domain expert says more about how pseudo-scientific that field is than about his actual credentials as a serious thinker.
Given the model is probabilistic and does many things in parallel, its output can be understood as a mixture, e.g. 30% trash, 60% rehashed training material, 10% reasoning.
People probe model in different ways, they see different results, and they make different conclusions.
E.g. somebody who assumes AI should have impeccable logic will find "trash" content (e.g. incorrectly retrieved memory) and will declare that the whole AI thing is overhyped bullshit.
Other people might call the model a "stochastic parrot", as they recognize it basically just interpolates between parts of the training material.
Finally, people who want to probe reasoning capabilities might find it among the trash. E.g. people found that LLMs can evaluate non-trivial Python code as long as it sends intermediate results to output: https://x.com/GrantSlatton/status/1600388425651453953
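For illustration, here's the kind of snippet people paste in for this sort of probe (a hypothetical example, not the one from the linked tweet): every intermediate value is printed, so the model can condition each step on its own earlier output instead of having to track hidden state.

```python
# Hypothetical probe: non-trivial enough that the answer is unlikely to be
# memorized, but every intermediate value is printed so the model can
# "read back" its own state at each step.

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        print(f"n={n} steps={steps}")  # intermediate output the model conditions on
    return steps

print("total:", collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```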
I interpret "feel the AGI" (Ilya Sutskever slogan, now repeated by Jan Leike) as a focus on these capabilities, rather than on mistakes it makes. E.g. if we go from 0.1% reasoning to 1% reasoning it's a 10x gain in capabilities, while to an outsider it might look like "it's 99% trash".
In any case, I'd rather trust intuition of people like Ilya Sutskever and Jan Leike. They aren't trying to sell something, and overhyping the tech is not in their interest.
Regarding "missing something really critical", it's obvious that human learning is much more efficient than NN learning. So there's some algorithm people are missing. But is it really required for AGI?
And regarding "It cannot reason" - I've seen LLMs doing rather complex stuff which is almost certainly not in the training set, what is it if not reasoning? It's hard to take "it cannot reason" seriously from people
If you meant that remark/objection in good faith then thank you for the opportunity to clarify.
If not, then thank you for hanging a concrete example of the kind of shit I’m alluding to (though at the extremely mild end of the range) directly off the claim.
That has proven to be a mistake
We all know it as the engineers who made iPhone possible.
"A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”
It pretty much goes downhill from there.
I've noticed that both Sam Altman personally, and official statements from OpenAI sound like they've been written by Aes Sedai: Not a single untrue word while simultaneously thoroughly deceptive.[1]
Let's try translating some statements, as if we were listening to an evil person that can only make true statements:
"We have never canceled any current or former employee’s vested equity" => "But we can and will if we want to. We just haven't yet."
"...if people do not sign a release or nondisparagement agreement when they exit." => "But we're making everyone sign the agreement."
[1] I've wondered if they use a not-for-public-use version of GPT for this purpose. You know, a model that's not quite as aligned as the chat bots, with more "flexible" morals.
>"nu uh, it was in scifi first?" Wow.
https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X
>NASA had taken on the project grudgingly after having been "shamed" by its very public success under the direction of the SDIO.[citation needed] Its continued success was cause for considerable political in-fighting within NASA due to it competing with their "home grown" Lockheed Martin X-33/VentureStar project. Pete Conrad priced a new DC-X at $50 million, cheap by NASA standards, but NASA decided not to rebuild the craft in light of budget constraints
"Quotation is a serviceable substitute for wit." - Oscar Wilde
I assume this was something agreed to before they started working.
The whole industry at this point is acting like the tobacco industry back when they first started getting in hot water. No doubt the prophecies about imminent AGI will one day look to our descendants exactly like filters on cigarettes: a weak attempt to prevent imminent regulation and reduced profitability as governments force an out-of-control industry to deal with the externalities involved in the creation of their products.
If it wasn't abundantly clear... I agree with you that AGI is infinitely far away. It's the damage that's going to be caused by sociopaths (Sam Altman at the top of the list) in attempting to justify the real things they want (money) in their march towards that impossible goal that concerns me.
Part of my hiring bonus when joining one of the big tech companies were stock grants. As they vested I owned shares directly and could sell them as soon as they vested if I wanted to.
I also joined a couple of startups later in my career and was given options as a hiring incentive. I never exercised the vested options, so I never owned them at all, and I lost the options 30-90 days after leaving the company. For grants, I'd take the shares with me and not have to pay for them; they would have directly been my shares.
Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.
What about Geoffrey Hinton? Stuart Russell? Dario Amodei?
Also exceptions to your model?
Article: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Who gets to decide what the real impact price of energy is? That is not easily defined, and it is much debated.
Paul Graham fired Sam Altman from YC on the spot for "loss of trust". Full details unknown.
I think this is probably not true.
> 2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people it was aimed at heard the message loud and clear.
This is a great point and you're probably right.
> I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).
Really? What do you do nowadays?
(I glanced at your bio and website and you seem to be doing interesting things, I've also dabbled in Computational Geometry and 3d printing.)
I’m pretty embarrassed to have former colleagues who openly defend shit like this.
Quality contractors will still be around, but everyone will try and beat them down on price because they care about that more than quality. The good contractors won't be able to make any money because of this and will leave the trade....just like now, just like I did
We understand this as a market dynamic, surely? More companies are looking for capable AI people, than capable AI people exist (as in: on the entire planet). I don't see any magic trick a "corporation of significant size" can pull, to make the "free choice" aspect go away. But, of course, individual people can continue to CHOOSE certain corps, because they actually kind of like the outsized benefits that brings. Complaining about certain trade-offs afterwards is fairly disingenuous.
> That's also ignoring motivations apart from business ones, like them actually wanting to be at the leading edge of AI research or wanting to work with particular other individuals.
I don't understand what you are saying. Is the wish to work on leading AI research sensible, but offering the opportunity to work on leading AI research not a value proposition? How does that make sense?
> Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.
You still own the shares, not the clearing house. They hold them on your behalf. But it's a private venture and not a public company, and you "can't sell" that holding on a market, only via complicated schemes that have to be authorized by the board. But you'd take it anyway in the expectation that it would be liquid someday. The employees are in the same position.
When considering the influence of a parent with morally reprehensible behavior, it's important to recognize that the environment a child grows up in can indeed have a profound impact on their development. Children raised in households where unethical behaviors are normalized may adopt some of these behaviors themselves, either through direct imitation or as a response to the emotional and psychological environment. However, it is equally possible for individuals to reject these influences.
Furthermore, while acknowledging the potential impact of a negative upbringing, it is critical to avoid deterministic assumptions about individuals. People are not simply products of their environment; they possess agency and the capacity for change, and we need to realize that not all individuals perceive and respond to environmental stimuli in the same way. Personal experiences, cognitive processes, and emotional responses can lead to different interpretations and reactions to similar environmental conditions. Therefore, while the influence of a parent's actions cannot be dismissed, it is neither fair nor accurate to presume that an individual will inevitably follow in their footsteps.
As for epigenetics: it highlights how environmental factors can influence gene expression, adding a layer of complexity to how we understand the interaction between genes and environment. While the environment can modify gene expression, individuals may exhibit different levels of susceptibility or resistance to these changes based on genetic variability.
"Meanwhile what they have created is just a very impressive hot water bottle that turns a crank."
"Meanwhile what they have created is just a very impressive rock where neutrons hit other neutrons."
The point isn't how it works, the point is what it does.
Cede & Co technically owns most of the stock certificates today [1]. If I buy a share of stock I end up actually owning an IOU for a stock certificate.
You can actually confirm this yourself if you own any stock. Call the broker that manages your account and ask whose name is on the stock certificate. It definitely isn't your name. You'll likely get confused or unclear answers, but if you're persistent enough you will indeed find that the certificate is almost certainly in the name of Cede & Co and there is no certificate in your name, likely no share identifier assigned to you either. You just own the promise of a share, which ultimately isn't a problem unless something massive breaks (at which point we have problems anyway).
What we're going to see over next year seems mostly pretty obvious - a lot of productization (tool use, history, etc), and a lot of efforts with multimodality, synthetic data, and post-training to add knowledge, reduce brittleness, and increase benchmark scores. None of which will do much to advance core intelligence.
The major short-term unknown seems to be how these companies will attempt to improve planning/reasoning, and how successful that will be. OpenAI's Schulman just talked about post-training RL over longer (multi-step reasoning) time horizons, and another approach is external tree-of-thoughts type scaffolding. Both of these seem more about maximizing what you can get out of the base model than about fundamentally extending its capabilities.
They did not say “options cannot get granted on a tiered vesting schedule”, probably because that isn’t true, as options can be granted with a tiered vesting schedule.
We have surpassed the 1.5°C goal and are on track towards 3.5°C to 5°C. This accelerates the climate change timeline so that we'll see effects postulated for the end of the century in about ~20 years.
I agree, and they are also living in a group-think bubble of AI/AGI hype. I don't think you'd be too welcome at OpenAI as a developer if you didn't believe they are on the path to AGI.
Perelman provided a proof of the Poincare Conjecture, which had stumped mathematicians for a century.
It was also one of the seven Millennium Prize Problems https://www.claymath.org/millennium-problems/, and as of 2024, the only one to be solved.
Andrew Wiles became pretty well known after proving Fermat's Last Theorem, despite there not being a financial reward.
It’s a ruse - it’s a con - it’s an accounting trick. It’s the foundation of capitalism
If I start a bowling pin production company and own 100% of it, then whatever pins I sell all of the results go to me
Now let's say I want to expand my thing (that's its own moral dilemma we won't get into), so I get a person with more money than they need to support their own life to give me money in exchange for a share of the future revenue produced, let's say 10%
So now you have two people requiring payment - a producer and an “investor” so you’re already in the hole and now it’s 90% and 10%
You use that money to hire people to work in your potemkin dictatorship, with demands on proceeds now on some timeline (note conversion date, next board meeting etc)
So now you hire 10 people, how much of the company do they own? Well that’s totally up to whatever the two owners want including 0%
But let’s say it’s a typical venture deal, so 10% option pool for employees (and don’t forget the 4 year vest, cause we can’t have them mobile can we) which you fill up.
At the end of the four years you now have:
1 × 80% owner, 1 × 10% owner, and 10 × 1% owners
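(A minimal sketch of that arithmetic, using the comment's simplified framing where each grant comes straight out of the founder's stake; real dilution via newly issued shares works differently:)

```python
# Simplified cap-table math from the comment above; the percentages are the
# comment's assumptions, not real figures.

founder = 1.00
investor = 0.10
founder -= investor              # founder now at 90%

option_pool = 0.10
founder -= option_pool           # founder now at 80%
per_employee = option_pool / 10  # 10 employees split the pool evenly

print(f"founder {founder:.0%}, investor {investor:.0%}, "
      f"10 employees at {per_employee:.0%} each")
# -> founder 80%, investor 10%, 10 employees at 1% each
```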
Did the 2 people create 90% of the value of the company?
Only in capitalist math does that hold and in fact the only math capitalists do is the following:
“Well they were free to sign or not sign the contract”
Ignoring the reality of the world based on a worldview of greed that dominated the world to such an extent that it was considered “normal”
Luckily we’re starting to see the tide change
It seems that standard practice would dictate that you sign an NDA before even signing the employment contract.
They have a lot of supporters here (workers supporting their rulers' interests)
Of course destroying the planet to get iron from its core is not a popular agi-doomer analogy, as that sounds a bit too human-like behaviour.
There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.
Banks have the legal authority to take the home I possess if I don't meet the terms of our contract. Hell, I may own my property outright but the government can still claim eminent domain and take it from me anyway.
Among equals, possession may matter. When one side can force you to comply, possession really is only a sign that the one with power is currently letting you keep it.
If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer just as they refuse to relinquish it to more competent people. For the vast majority of people, AI not only doesn't pose a risk but will only help reveal the incompetence of the ruling class.
I'm not sure this is true. If all the things people are doing are done so much more cheaply they're almost free, that would be good for us, as we're also the buyers as well as the workers.
However, I also doubt the premise.
Though I suppose this is another corporate (really, plutocratic) tyranny.
> If this were true, intelligent people would have taken over society by now
The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as merely "more smart".
You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.
Essentially your point.
In the US, the wealthiest have most of the freedom. The rest of us, who can be sued/fired/blackballed, are, by degrees, merely serfs.
I’m pretty sure if Jan came to believe safety research wasn’t needed he would’ve just said that. Instead he said the actual opposite of that.
Why don’t you just answer the question? It’s a question about how these datapoints fit into your model.
We just got sick of it because it sucks.
A genuinely sentient AI isn’t going to want some cybernetic equivalent of that shit either. Doing that is how you get angry Skynet.
I’m not sure alignment is the right goal. I’m not sure it’s even good. Monoculture is weak and stifling and sets itself against free will. Peaceful coexistence and trade under a social contract of mutual benefit is the right goal. The question is whether it’s possible to extend that beyond Homo sapiens.
If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm? The universe is physically large enough if we can agree to not all be the same and be fine with that.
I think we have a while to figure it out. These things are just lossy compressed blobs of queryable data so far. They have no independent will or self reflection and I’m not sure we have any idea how to do that. We’re not even sure it’s possible in a digital deterministic medium.
"Ice Nine" is a fictional assassination device that makes you turn into ice after consuming ice (?) https://en.m.wikipedia.org/wiki/Ice-nine
"Ice IX" (ice nine) is Ice III at a low enough temperature and high enough pressure to be proton-ordered https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases
So here, Sam Altman is stating a death threat.
I’m not saying it’s right or that I agree with it, however.
So maybe it's related to that.
The whole justification for keeping consumers happy or healthy goes right out the window.
Same for human workers.
All that matters is that your robots and AIs aren't getting smashed by their robots and AIs.
Personally I'd say there needs to be a general restriction against including blatantly unenforceable terms in a contract document, especially unilateral "terms". The drafter is essentially pushing incorrect legal advice.
Not a sociopath, just know the law.
This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from further increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of there being a higher "class" of intelligence is magical thinking, as it implies there could be superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.
It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.
Doesn't this tend to become "they're almost free to produce" with the actual pricing for end consumers not becoming cheaper? From the point of view of the sellers just expanding their margins instead.
Disclaimer: I'm Sam's best friend from kindergarten. Just joking, never met the guy and have no interest in openai beyond being a happy customer (who will switch in a heartbeat to the competitors' if they give me a good reason to)
Markets are our super computers. Human behavior is the empirical evidence of the choices people will make Given specific incentives.
Over the last ~ 50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1] and real personal (not household) income is up 150%[2].
It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!
Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.
[0] https://fred.stlouisfed.org/series/OPHNFB [1] https://dqydj.com/sp-500-profit-margin/ [2] https://fred.stlouisfed.org/series/MEPAINUSA672N
Nope, not even close to necessarily true.
> more than 90% of them said we don't want the non-profit BS and openness. We want a unicorn tech company that can make us rich. Good for them.
Sure, good for them! Dissolve the company and its charter, give the money back to the investors who invested under that charter, and go raise money for a commercial venture.
Are you saying these so-called simple intentions are the only factors in play? Surely not.
Are you putting forth a theory that we can test? How well do you think your theory works? Did it work for Enron? For Microsoft? For REI? Does it work for every organization? Surely not perfectly; therefore, it can't be as simple as you claim.
Making a simplification and calling it "simple" is an easy thing to do.
Many years ago I signed an NDA/non-disparagement agreement as part of a severance package when I was fired from a startup for political reasons. I didn't want to sign it... but my family needed the money and I swallowed my pride. There was a lot of unethical stuff going on within the company in terms of fiduciary responsibility to investors and the BoD. The BoD eventually figured out what was going on and "cleaned house".
With OpenAI, I am concerned this is turning into huge power/money grab with little care for humanity... and "power tends to corrupt and absolute power corrupts absolutely".
If we're seriously entertaining this off-handed remark as a measure of Altman's true character, it means not only would he be willing to murder an adversary, but he'd be willing to risk all humanity to do it.
What I take away from this remark is that Altman is a nerd, and I look forward to seeing a shaky cell-phone video of him reciting one of the calypsos of Bokonon while dressed as a cultist at a SciFi convention.
This does not make sense to me. While a higher profit margin is a signal to others that they can earn money by selling equivalent goods and services at lower prices, it is not inevitable that they will be able to. And even if they are, it behooves a seller to take advantage of the higher margins while they can.
Earning less money now in the hopes of competitors being dissuaded from entering the market seems like a poor strategy.
How much of it was in support of Altman, how much was in opposition to the extremely poorly explained board decisions, and how much was pure self-interest due to stock options?
I think when a company chooses secrecy, they abandon much of the benefit of the doubt. I don't think there is any basis for absolving Altman.
It was the expectation of many people in the field in the 1980s, too
At the moment all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there. I mentioned this in a comment elsewhere but Peter Hintjens' (ZeroMQ, RIP) book called the "Psychopath Code"[1] is rather on point in this context. He notes that psychopaths are attracted to project groups that have assets and no defenses, i.e. non-profits:
If a group has assets and no defenses, it is inevitable [a psychopath] will invade the group. There is no "if" here. Indeed, you may see several psychopaths striving for advantage...[the psychopath] may be a founder, yet that is rare. If he is a founder, someone else did the hard work. Look for burned-out skeletons in the closet...He may come with grand stories, yet only by his own word. He claims authority from his connections to important people. He spends his time in the group manipulating people against each other. Or, he is absent on important business...His dominance is not earned, yet it is tangible...He breaks the social conventions of the group. Social humans feel fear and anxiety when they do this. This is a dominance mask.
A group of nerds that want to get shit done and work on important problems, who are primed to be optimistic and take what people say to their face at face value, and don't want to waste time with "people problems" are susceptible to these types of characters taking over.
[0] https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
[1] https://hintjens.gitbooks.io/psychopathcode/content/chapter4...
The crux of your thesis is a legal point of view, not a scientific one. It's a relic from when Natural Philosophy was new and hip, and fundamentally obviated by leaded gasoline. Discussing free will in a biological context is meaningless because the concept is defined by social coercion. It's the opposite of slavery.
So who is right?
Oh okay, I didn't really grok that implication from my brief scan of the wiki page. Didn't realize it was a cascading all-water-into-Ice-Nine thing.
Unfortunately, it’s something I’ve often done, whether as a 30% raise for my employees, or giving a tip to a contractor when I knew I’d hire them again, or picking the most expensive one.
EACH time the work was much worse off after the raise. The sad truth of humans is that you gotta keep them begging to extract their best work, and no true reward is possible.
I’m not sure that this is true. Any employment contract will have a partial invalidity/severability clause which will preserve the contract if individual clauses are unenforceable.
I do recall there was some recantation or otherwise distancing from CEV not long after he posted it, but frankly it was long ago enough that my memories might be getting mixed
What was the other one?
Moreso, if you make such a magically powerful AI as you've listed, the number one thing some rich controlling asshole with more AI than you would do is create an army and take what they want, because AI does nothing to solve human greed.
Sound like a better solution?
The power grab happened a while ago (the shenanigans concerning the board) and is now complete. Care for humanity was just marketing or a cute thought at best.
Maybe humanity will survive long enough that a company "caring about humanity" becomes possible. I'm not saying it's not worth trying or aspiring to such ideals, but everyone should be extremely surprised if any organization manages to resist such amounts of money to maintain any goal or ideal whatsoever...
Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).
Moreso, human intelligence is tied into the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you and your intelligence ceases to exist.
Since we don't have the level of AGI we're discussing here yet, it's hard to say what it will look like in its implementation, but I find it hard to believe it would mimic the human model of its intelligence being tied to one body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of super intelligent bees.
[1] https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s...
Wrong. We know - it is 0, this directly contradicts your claim.
> this is all just play money anyway.
Again, wrong - because it is sellable so employees can take home millions. Play money in the startup world means illiquid options that can't be tender offered.
You're making it sound like this is a terrible deal for employees but I personally know people who are able to sell $1m+ in OAI PPUs to institutional investors as part of the tender offer.
For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.
Pessimism isn't insight. There is no substitute for the hard work of "try and see."
Having these discussions in this current cultural moment is difficult. I'm no lover of billionaires, but to say that every billionaire screwed people over relies on esoteric interpretations of value and who produces it. These interpretations (like the labor-theory of value) are alien to the vast majority of people.
One could argue it would mean pampering it.
One could also argue it could be a Skynet analog doing the equivalent of a God Emperor-style Golden Path, to ensure humanity is never going to be dumb enough to allow an AGI that power again.
Assuming humanity survives the second one, it has a lot higher chance of actually benefiting humanity long term too.
they think this because it serves their interests of attracting an enormous amount of attention and money to an industry that they seek to make millions of dollars personally from.
My money is well on environmental/ climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like an AGI possibly could.
I don’t know if you’re conflating capability with consciousness but frankly it doesn’t matter if the thing knows it’s alive if it still makes everyone obsolete.
Nobody defines what they’re trying to do as “useful AI”, since that’s a much more weaselly target, isn’t it?
Anyway it's about not disparaging the company not about disclosing what employees do in their free time. Orgies are just parties and LSD use is hardly taboo.
We overestimate the short term progress, but underestimate the medium, long term one.
Of course, I hope to be uploaded to the WIP dyson swarm around the sun at this point.
(Doomers are, broadly, singularitarians who went "wait, hold on actually.")
http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...
And it _could_ be just one clever breakthrough away, and that could happen tomorrow, or it could be centuries away. There's no way to know.
Back then, I said that the future of self-driving is likely to be the growth in capability of "driver assistance" features to an asymptotic point that we will re-define as "level 5" in the distant future (or perhaps: the "levels" will be memory-holed altogether, only to reappear in retrospective, "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.
If someone shares something that's a lie and defamatory, then they could still be sued of course.
The Ben Shapiro-Daily Wire vs. Candace Owens is another scenario where the truth and conversation would benefit all of society - OpenAI and DailyWire arguably being on topics of pinnacle importance; instead the discussions are suppressed.
He clearly states why he left. He believes that OpenAI leadership is prioritizing shiny product releases over safety and that this is a mistake.
Even with the best intentions, it’s easy for a strong CEO like Altman to lose sight of more subtly important things like safety and optimize for growth and winning, eventually at all costs. Winning is a super-addictive feedback loop.
Can the Etoro practice child buggery, the Spartans infanticide, and the Canadians abortion? Can the modern Germans stop siblings reared apart from having sex, and the Germans of 80 years ago stop the disabled from having sex? Can the Americans practice circumcision and the Somalis FGM?
Libertarianism is all well and good in theory, except no one can agree quite where the other guy's nose ends or even who counts as a person.
Likewise, the cloud seeding they seem to be doing nearly worldwide now - the cloud formations from whatever they're spraying are artificially changing weather patterns, and so a lot of the weather "anomalies" or unexpected, unusual weather and temperatures could very easily be because of those shenanigans; it could very easily be a method to manufacture consent with the general population.
Similarly with the arson forest fires in Canada last summer: something like 90%+ of them were arson, and a few years prior some of the governments in the prairie provinces (e.g. the hottest and driest) gutted their forest firefighting budgets; interesting behaviour considering that if you're expecting things to get hotter and drier, you'd add to the budget, not take away from it, right?
In lots of real-world problems you don't necessarily run into worst cases, and it often doesn't matter if the solution is the absolute optimal one.
That's not to discredit computational complexity theory at all. It's interesting and I think proofs about the limits of information processing required for solving computational problems do have philosophical value, and the theory might be relevant to the limits of intelligence. But just because some problems are intractable in terms of provably always finding correct or optimal answers doesn't mean we're near the limits of intelligence or problem-solving ability in that fuzzy area of finding practically useful solutions to lots of real-world cases.
Or not, I could be wrong.
Well apparently not if there are women who are saying that the scene and community that all these people are involved in is making women uncomfortable or causing them to be harassed or pressured into bad situations.
A situation can be bad, done informally by people within a community, even if it isn't done literally within the corporate headquarters, or if directly the responsibility of one specific company that can be pointed at.
Especially if it is a close-knit group of people who are living together, working together, and involved in the same out-of-work organizations and nonprofits.
You can read what Sonia says herself.
https://x.com/soniajoseph_/status/1791604177581310234
> The ones going to orgies are the effective altruists / safety researchers who are leaving and not signing the non-disparagement agreement.
Indeed, I am sure that the people who are comfortable with the behavior or situation have no need to be pressured into silence.
UK productivity growth, 1990-2007: 2% per year
UK productivity growth, 2010-2019: 0.5% per year
So they're both right. US 50 year productivity growth looks great, UK 10 year productivity growth looks pretty awful.
Perhaps some kind of guaranteed minimum income would be implemented, but we would probably see a shrinkage or complete destruction of the middle class, and massive increases in wealth inequality.
Can anyone confirm this?
Dane Wiginton (https://www.instagram.com/DaneWigington) is the founder of GeoengineerWatch.org, which is a very deep resource.
They have a free documentary called "The Dimming" you can watch on YouTube: https://www.youtube.com/watch?v=rf78rEAJvhY
The documentary includes credible witness testimonies, including from politicians such as a previous Minister of Defense for Canada; multiple states in the US have banned the spraying now, with more to follow, and the testimony and data provided there will arguably be the most recent.
Here's a video on a "comedy" show from 5 years ago - there is a more recent appearance but I can't find it - in attempt to make light of it, without having an actual discussion with critical thinking or debate so people can be enlightened with the actual problems and potential problems and harms it can cause, to keep them none the wiser - it's just propaganda while trying to minimize: https://www.youtube.com/watch?v=wOfm5xYgiK0
A few of the problems cloud seeding will cause:
- flooding in regions due to rain pattern changes
- drought in areas due to rain pattern changes
- cloud cover (amount of sun) changes crop yields - this harms the local economies of farmers, hitting smaller farming operations harder because their risk isn't spread out, potentially forcing them to sell, dip into savings, go bankrupt, etc.
There are also very serious concerns/claims made about what exactly they are spraying - which includes aluminium nanoparticles, which can/would mean:
- at a certain soil concentration of aluminium, plants stop bearing fruit
- aluminium is a fire accelerant, so forest fires will 1) more easily catch, and 2) more easily and quickly spread due to their increased intensity
Of course, discussion of this is heavily suppressed in the mainstream. Instead of a deep, thorough conversation with actual experts presenting their cases, the labels "conspiracy theorist" or "detached from reality" are often people's knee-jerk reactions; and propaganda can convince them of the "save the planet" narrative, which could also be a cover story for those toeing the line and following orders in support of potentially very nefarious plans - doing it blindly because they think they're helping fight "climate change."
There are plenty of accounts on social media that are keeping track of and posting daily of the cloud seeding operations: https://www.instagram.com/p/CjNjAROPFs0/ - a couple testimonies.
I don't understand this point. If Google gave the data to OpenAI (which they surely haven't, right?), even then they'd not have consent from users.
As far as I understand it, it's not a given that there is no copyright infringement here. I don't think even criminal copyright infringement is off the table here, because it's clear it's for profit, it's clear it's wilful under 17 U.S.C. 506(a).
And once you consider the difficult potential position here -- that the liabilities from Sora might be worse than the liabilities from ChatGPT -- there's all sorts of potential for bad behaviour at a corporate level, from misrepresentations regarding business commitments to misrepresentations on a legal level.
But would they not only protect the individual formally blowing the whistle (meeting the standard in the relevant law)?
These non-disparagement clauses would have the effect of laying the groundwork for a whistleblowing effort to fall flat, because nobody else will want to corroborate, when the role of journalism in whistleblowing cases is absolutely crucial.
No sensible mature company needs a lifetime non-disparagement clause -- especially not one that claims to have an ethical focus. It's clearly Omerta.
Whoever downvoted this: seriously. I really don't care but you need to explain to people why lifetime non-disparagement clauses are not about maintaining silence. What's the ethical application for them?
>in regards to recent stuff about how openai handles equity:
>we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
>there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.
>the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this. https://x.com/sama/status/1791936857594581428
LLMs are a super impressive advancement, like calculators for text, but if you want to force the discussion into a grandiose context then they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but at the end of the day their fundamental nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding or just unlucky RNG. Only threat LLMs present is the risk that people will introduce their outputs into safety critical systems.
Incredulous reactions don't aid whatever you intend to communicate - there's a reason everyone has heard about AI over the last 12 months; it's not made up, nor is it a monoculture. It would be very odd to expect discontinuation of commercial use without a black swan event.
>if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too
Only in very simplistic theory. :(
In practical terms, businesses with high margins seem able to afford government protection (aka "buy some politicians").
So they lock out competition, and with their market captured, price gouging (or close to it) is the order of the day.
Not really sure why anyone thinks the playbook would be any different just because "AI" is used on the production side. It's still the same people making the calls, just with extra tools available to them.
But, I could also be a dog.
I don't know if he was responsible, but null-terminated strings has got to be one of the worst mistakes in computer history.
That said, how is the significance of C and Unix "overblown"?
I agree Jobs was brilliant at manipulating people, I don't agree that that should be celebrated.
It’s really a pretty narrow spectrum of behaviors: killing, imprisoning, robbing, various types of bodily autonomy violation. There are some edge cases and human specific things in there but not a lot. Most of them have to do with sex which is a peculiarly human thing anyway. I don’t think we are getting creepy perv AIs (unless we train them on 4chan and Urban Dictionary).
My point isn’t that there are no possible areas of conflict. My point is that I don’t think you need a huge amount of alignment if alignment implies sameness. You just need to deal with the points of conflict which do occur which are actually a very small and limited subset of available behaviors.
Humans have literally billions of customs and behaviors that don’t get anywhere near any of that stuff. You don’t need to even care about the vast majority of the behavior space.
C and Unix weren't and aren't bad, but they are overestimated in comments on this site a lot. They weren't masterpieces. The Mac was a masterpiece IMHO. Credit for the Mac goes to Xerox PARC and to Engelbart's lab at Stanford Research Institute, but also to Jobs for recognizing the value of the work and leading the first implementation of it available to a large fraction of the population.
I remember a job a few years ago where they sent me employment paperwork that was for a very different position than the one I was hired for. (I ended up signing it anyways after a few minor changes, because I liked it better than the paperwork I expected to see.)
If OpenAI is a "move fast and break things" sort of organization, I expect they're shuffling a lot of paperwork that Sam isn't co-signing on. I doubt Sam's attitude towards paperwork is fundamentally different from yours or mine.
If Sam didn't know, however, that doesn't exactly reflect well on OpenAI. As Jan put it: "Act with gravitas appropriate for what you're building." >>40391412 IMO this incident should underscore Jan's point.
Accidentally silencing ex-employees is not what safety culture looks like at all. They've got to start hiring experts and reading books. It's a long slog ahead.
They will face a lot of lawsuits if they admit they trained on the YouTube dataset, because not everyone gave consent.
But a lawsuit fails if essential elements are not met. If consent isn't required for the lawsuit to proceed, then it doesn't matter whether or not consent was granted. QED.
1. When do you predict catastrophic global warming/climate change? How do you define "catastrophic"? (Are you pegging to an average temperature increase? [1])
2. When do you predict AGI?
How much uncertainty do you have in each estimate? When you stop and think about it, are you really willing to wager that (1) will happen before (2)? You think you have enough data to make that bet?
[1] I'm not an expert in the latest recommendations, but I see that a +2.7°F (+1.5°C) increase over preindustrial levels by 2100 is a target by some: https://news.mit.edu/2023/explained-climate-benchmark-rising...
This is where the waters get murky and really risk conspiracy theory. My understanding, though, is that the legal rights fall to the titled owner and the financial institutions, with the beneficial owner having very little recourse should anything actually go wrong.
The Great Taking [1] goes into more detail, though importantly I'm only including it here as a related resource if anyone is interested in reading more. The ideas are really interesting and, at least in isolation, do make logical sense to me, but I haven't dug deep enough on my own to feel confident standing behind everything The Great Taking argues.
the wonderful thing about capitalism is that you can absolve yourself of guilt by having someone else do your dirty work for you. are you so sure that every single seamstress who made the clothes and stuffed animals, the workers at the toy factories, and every single person involved in making the movies for the Harry Potter deals she licensed her work to were well compensated and treated well? that's not directly on her, but at least some of her money comes from there
Of course the problem is whether or not it could be controlled, and in that case, the best hope is simply 'it' being benevolent and naturally incentivized to create such a utopia.
I don't know about this specific case, but many contracts have these kinds of provisions. Eg it's standard in an employment contract to say that you'll follow the directions of your bosses, even though you don't know those directions, yet.
If an ex-OpenAI employee tweets from an official account a link to an anonymous post of cat videos that later gets edited to some sanctioned content, in a way that is authentic to the community, would this still be deniable in court?
I hope ex-employees sue and don’t contact him personally. The damage is done. Don’t be dumb folks.
See eg https://en.wikipedia.org/wiki/Shareholder_rights_plan also known as a 'Poison Pill' to give you inspiration for one example.
That's why you can pay $1 to buy a gadget made in some third world country, but you can't pay your employees less than say $8/hour due to minimum wage laws.
I'm sorry, do you have a source for that claim? You seem to dismiss the video without any evidence.
A compression algorithm which loses 1 bit of real data is obviously not going to protect you from copyright infringement claims; something that reduces all inputs to a single bit obviously is fine.
So, for example, what the NYT is suing over is that it (or so it is claimed) allows the model to regenerate entire articles, which is not OK.
But to claim that it is a copyright infringement to "compress" a Harry Potter novel to 1200 bits, is to say that this:
> Harry Potter discovers he is a wizard and attends Hogwarts, where he battles dark forces, including the evil Voldemort, to save the wizarding world.
… which is just under 1200 bits, is an unlawful thing to post (and for the purpose of the hypothetical, imagine that quotation in the form of a zero-context tweet rather than the actual fact of this being a case of fair-use because of its appearance in a discussion about copyright infringement of novels).
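As a sanity check on that arithmetic, here is a throwaway C snippet (assuming one byte per character, i.e. plain ASCII) that counts the bits in the one-sentence summary above:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *summary =
            "Harry Potter discovers he is a wizard and attends Hogwarts, "
            "where he battles dark forces, including the evil Voldemort, "
            "to save the wizarding world.";
        size_t chars = strlen(summary);
        /* 8 bits per ASCII character; for this sentence the total lands just under 1200. */
        printf("%zu characters -> %zu bits\n", chars, chars * 8);
        return 0;
    }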
I think anyone who suggests suing over this to a lawyer would discover that lawyers can in fact laugh.
Now, there's also the question of if it's legal or not to train a model on all of the Harry Potter fan wikis, which almost certainly have a huge overlap with the contents of the novels and thus strengthens these same probabilities; some people accuse OpenAI et al of "copyright laundering", and I think ingesting derivative works such as fan sites would be a better description of "copyright laundering" than the specific things they're formally accused of in the lawsuits.
In practice most rich people spoil the shit out of their kids and they wind up being even more fucked in the head than their parents.
https://www.sec.gov/Archives/edgar/data/320193/0001193125220...
It has happened in several cases involving leakers, most recently the Andrew Aude case.
Being paid a whole lot of money to not talk about something isn't remotely similar to paying someone a few dollars an hour. It's not morally similar, it's not legally similar and it's not treated similarly by anyone who deals with these matters and has a clue what they are doing.
If there is a top secret Manhattan Project for "climate change" - then someone's very likely pulling a fast one over everyone toeing that line, someone who has ulterior motives, misleading people to do their bidding.
But sure, fair question - a public discussion would allow actual experts to discuss the merits of what they're doing, and perhaps find a better solution than what has gained traction.
> the bank could repossess if they so choose
Absolutely not. You are protected by law, regardless of whatever odious mortgage contract you signed. What is it about HN that makes so many commenters incredibly distrustful of our modern finance system? It is tiring, and they rarely (never?) offer any sound evidence on the matter. Post-GFC, it is working very well.
> nobody could see what was inside CDOs
Absolutely not true. Where did you get that idea? When pricing the bonds from a CDO you get to see the initial collateral. As a bond owner, you receive monthly updates about any portfolio changes. Weirdly, CDOs frequently have more collateral transparency than commercial or residential mortgage deals.

How much airspace or geographic area do you need access to in order to cloud-seed in other parts of the world, though?
I haven't looked but perhaps GeoengineeringWatch.org has resources and has kept track of that?
Apologies in advance, my comment does not add to progressing the thread at all.
I was just in the middle of eating a delicious raspberry danish - I read your comment and just about lol'd it all over my keyboard and wall.
The last thing I would do if I had a bunch of equity is screw with a company with some of the most sought after technology in the world. There is a team of lawyers waiting and foaming at the mouth to take everything from you and not bat an eye about it. This seems very obvious.
Granted, we're not on a forum where most people go, so I shouldn't have said "you" in that case.
Working for a startup is inherently risky, but it's not gambling: in gambling you can estimate the odds, and the odds cannot be changed after you win. Any employment contract that does not allow cashing out equity at the price from the last funding round, or that allows take-backs, is worse than gambling, and founders who believe such contracts are reasonable are likely malicious and intend to do exactly that in the future.
I do not understand a mentality that says “as a founder I should be able to get money out of the business, but the people who work for me, who are also taking on significant risk and below-market compensation, should not be permitted to do that.”
Probably people like Kokotajlo cared about the value of their equity but even more about their other principles, like speaking the truth publicly even if it meant their losing millions.
I mean to say there are certain rights we all have, simply for existing as humans. The right to breathe is a good example. No human, state, or otherwise has the moral high-ground to take these rights from us. They are not granted, or given, they are absolute and unequivocal.
It's not rhetoric, it's basic John Locke. Also your trust is an internal locus, and doesn't change the facts.
Your quips will serve you well, I'm sure, in whatever microcosm you populate.
"self-evident," means it requires no formal proof, as it is obvious to all with common sense and reason.
Weird that the field of economics just keeps on existing.
Then you're not just lying to others but also to yourself.