https://www.clarkhill.com/news-events/news/the-importance-of...
https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
It does not prevent you from entering into contracts with other private entities, like your company, about what THEY allow you to say or not. In this case there might be other laws about whether a company can unilaterally force that on you after the fact, but that's not a free speech consideration, just a contract dispute.
See https://www.themuse.com/advice/non-disparagement-clause-agre...
Consider for example that when Amazon bought the Ring security camera system, it had a “god mode” that allowed executives and a team in Ukraine unlimited access to all camera data. It wasn’t just a consumer product for home users, it was a mass surveillance product for the business owners:
https://theintercept.com/2019/01/10/amazon-ring-security-cam...
The EFF has more information on other privacy issues with that system:
https://www.eff.org/deeplinks/2019/08/amazons-ring-perfect-s...
These big companies and their executives want power. Withholding huge financial gain from ex-employees to maintain their silence is one way of retaining that power.
https://openai.com/index/introducing-superalignment/
> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
> While superintelligence seems far off now, we believe it could arrive this decade.
> Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:
> How do we ensure AI systems much smarter than humans follow human intent?
> Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)
This is the article that the author talks about on X.
https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...
And here is a more detailed explanation:
From the article:
“““
It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
”””
[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...
> our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
https://nitter.poast.org/janleike/status/1791498174659715494
Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.
https://en.wikipedia.org/wiki/Peppercorn_(law)
There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.
I think it may be time for something like this: https://www.openailetter.org/
My guess would be that YC founders like sama have some sort of special power to slap down comments that they feel are violating HN discussion guidelines.
> PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years. Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value. So in the offer example above, the candidate received $2M worth of PPUs, which means that their capped amount they could sell them for would be $20M
> The most recent liquidation event we’re aware of happened during a tender offer earlier this year. It was during this event that some early employees were able to sell their profit participation units. It’s difficult to know how often these events happen and who is allowed to sell, though, as it’s on company discretion.
This NDA wrinkle is another negative. Honestly, I think the entire OpenAI compensation model is smoke and mirrors, which is normal for startups and obviously inferior to RSUs.
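To make the cap arithmetic concrete, here's a minimal sketch (Python, purely illustrative; the 10x multiple and the $2M example come from the quote above, everything else is assumed):

    # Illustrative only: models the 10x growth cap described in the quote,
    # not OpenAI's actual PPU terms.
    def capped_sale_value(grant_value, growth_multiple, cap_multiple=10.0):
        # Units can appreciate freely, but sale proceeds are capped at
        # cap_multiple times the original grant value.
        return grant_value * min(growth_multiple, cap_multiple)

    # The quoted example: a $2M grant can never be sold for more than $20M,
    # even if the units have notionally grown 25x.
    print(capped_sale_value(2_000_000, growth_multiple=25))  # -> 20000000.0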
Isn't that the reason more competent lawyers put in the royal lives[1] clause? It specifies that the contract is valid until 21 years after the death of the last currently-living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly be beyond the lifetime of any currently-employed persons.
The vast majority of datacenters currently in production will be entirely powered by carbon-free energy. From best to worst:
1. Meta: 100% renewable
2. AWS: 90% renewable
3. Google: 64% renewable with 100% renewable energy credit matching
4. Azure: 100% carbon neutral
[1]: https://sustainability.fb.com/energy/
[2]: https://sustainability.aboutamazon.com/products-services/the...
[3]: https://sustainability.google/progress/energy/
[4]: https://azure.microsoft.com/en-us/explore/global-infrastruct...
>...
>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.
From OpenAI's charter: https://openai.com/charter/
Now read Jan Leike's departure statement: >>40391412
That's why this is everyone's business.
What made you think it was the next Enron five years ago?
I appreciate you having the guts to stand up to them.
There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.
[0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...
Because that's the distinction being argued here: it's "a handful"[0] of probabilities, not the complete work.
[0] I'm not sold on the phrasing "a handful", but I don't care enough to argue terminology; the term "handful" feels like it's being used in a sorites paradox kind of way: https://en.wikipedia.org/wiki/Sorites_paradox
https://www.nytimes.com/2023/12/27/business/media/new-york-t...
https://www.reuters.com/legal/us-newspapers-sue-openai-copyr...
https://www.washingtonpost.com/technology/2024/04/09/openai-...
Some decided to make deals instead
https://www.federalregister.gov/documents/2023/03/16/2023-05...
So I think the law, at least as currently interpreted, does care about the process.
Though maybe you meant as to whether a new work infringes existing copyright? As this guidance is clearly about new copyright.
An AI-enhanced Photoshop, however, could do wonders, as the base capabilities seem to be mostly there. I haven't used any of the newer AI stuff myself, but https://www.shruggingface.com/blog/how-i-used-stable-diffusi... makes it pretty clear the building blocks are largely there. So my guess is the main disconnect is in making the machines understand natural-language instructions for how to change the art.
https://www.reddit.com/r/OpenAI/comments/1804u5y/former_open...
1. If he hadn't turned down the money, you wouldn't have heard of him at all;
2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people who are in that position heard the message loud and clear.
3. On a very basic level, it's very easy to understand that there's gotta be something wrong with the award if a deserving recipient turns it down. What exactly is wrong is left as an exercise to the reader — as you'd expect of a mathematician like Perelman.
Quote (from [1]):
From the few public statements made by Perelman and close colleagues, it seems he had become disillusioned with the entire field of mathematics. He was the purest of the purists, consumed with his love for mathematics, and completely uninterested in academic politics, with its relentless jockeying for position and squabbling over credit. He denounced most of his colleagues as conformists. When he opted to quit professional mathematics altogether, he offered this confusing rationale: “As long as I was not conspicuous, I had a choice. Either to make some ugly thing or, if I didn’t do this kind of thing, to be treated as a pet. Now when I become a very conspicuous person, I cannot stay a pet and say nothing. That is why I had to quit.”
This explanation is confusing only to someone who has never tried to get a tenured position in academia.
Perelman was one of the few people to not only give the finger to the soul-crushing, dehumanizing system, but to also call it out in a way that stung.
He wasn't the only one, but the only other person I can think of is Alexander Grothendieck [2], who went as far as declaring that publishing any of his work would be against his will.
Incidentally, both are of Russian-Jewish origin/roots, and almost certainly autistic.
I find their views very understandable and relatable, but then again, I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).
[1] https://nautil.us/purest-of-the-purists-the-puzzling-case-of...
Given the model is probabilistic and does many things in parallel, its output can be understood as a mixture, e.g. 30% trash, 60% rehashed training material, 10% reasoning.
People probe the model in different ways, see different results, and draw different conclusions.
E.g. somebody who assumes AI should have impeccable logic will find "trash" content (e.g. incorrectly retrieved memory) and will declare that the whole AI thing is overhyped bullshit.
Other people might call the model a "stochastic parrot", as they recognize it basically just interpolates between parts of the training material.
Finally, people who want to probe reasoning capabilities might find them among the trash. E.g. people found that LLMs can evaluate non-trivial Python code as long as it sends intermediate results to output: https://x.com/GrantSlatton/status/1600388425651453953
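As a toy illustration of that probing style (my own example, not the code from the linked tweet): you hand the model a snippet like the one below and ask what it prints. Because the loop emits intermediate state, the model can "trace" execution step by step instead of guessing the final value in one jump.

    # Toy example of the kind of program an LLM can "evaluate" fairly reliably:
    # the print() calls expose intermediate results for it to condition on.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
            print(f"step {steps}: n = {n}")
        return steps

    print("total steps:", collatz_steps(7))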
I interpret "feel the AGI" (an Ilya Sutskever slogan, now repeated by Jan Leike) as a focus on these capabilities rather than on the mistakes it makes. E.g. if we go from 0.1% reasoning to 1% reasoning, that's a 10x gain in capabilities, while to an outsider it might look like "it's 99% trash".
In any case, I'd rather trust the intuition of people like Ilya Sutskever and Jan Leike. They aren't trying to sell something, and overhyping the tech is not in their interest.
Regarding "missing something really critical", it's obvious that human learning is much more efficient than NN learning. So there's some algorithm people are missing. But is it really required for AGI?
And regarding "It cannot reason" - I've seen LLMs do rather complex things that are almost certainly not in the training set; what is that if not reasoning? It's hard to take "it cannot reason" seriously from people
>"nu uh, it was in scifi first?" Wow.
https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X
>NASA had taken on the project grudgingly after having been "shamed" by its very public success under the direction of the SDIO.[citation needed] Its continued success was cause for considerable political in-fighting within NASA due to it competing with their "home grown" Lockheed Martin X-33/VentureStar project. Pete Conrad priced a new DC-X at $50 million, cheap by NASA standards, but NASA decided not to rebuild the craft in light of budget constraints
"Quotation is a serviceable substitute for wit." - Oscar Wilde
Article: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Cede & Co technically owns most of the stock certificates today [1]. If I buy a share of stock I end up actually owning an IOU for a stock certificate.
You can actually confirm this yourself if you own any stock. Call the broker that manages your account and ask whose name is on the stock certificate. It definitely isn't your name. You'll likely get confused or unclear answers, but if you're persistent enough you will indeed find that the certificate is almost certainly in the name of Cede & Co and there is no certificate in your name, likely no share identifier assigned to you either. You just own the promise to a share, which ultimately isn't a problem unless something massive breaks (at which point we have problems anyway).
We have surpassed the 1.5°C goal and are on track towards 3.5°C to 5°C. This accelerates the climate change timeline so that we'll see effects postulated for the end of the century in about 20 years.
Perelman provided a proof of the Poincaré Conjecture, which had stumped mathematicians for a century.
It was also one of the seven Millennium Prize Problems https://www.claymath.org/millennium-problems/, and as of 2024, the only one to be solved.
Andrew Wiles became pretty well known after proving Fermat's Last Theorem, despite there not being a financial reward.
"Ice Nine" is a fictional assassination device that makes you turn into ice after consuming ice (?) https://en.m.wikipedia.org/wiki/Ice-nine
"Ice IX" (ice nine) is Ice III at a low enough temperature and high enough pressure to be proton-ordered https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases
So here, Sam Altman is making a death threat.
So maybe it's related to that.
Over the last ~50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1], and real personal (not household) income is up 150%[2].
It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!
Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.
[0] https://fred.stlouisfed.org/series/OPHNFB [1] https://dqydj.com/sp-500-profit-margin/ [2] https://fred.stlouisfed.org/series/MEPAINUSA672N
At the moment all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there. I mentioned this in a comment elsewhere, but Peter Hintjens' (ZeroMQ, RIP) book "The Psychopath Code"[1] is rather on point in this context. He notes that psychopaths are attracted to project groups that have assets and no defenses, i.e. non-profits:
If a group has assets and no defenses, it is inevitable [a psychopath] will invade the group. There is no "if" here. Indeed, you may see several psychopaths striving for advantage...[the psychopath] may be a founder, yet that is rare. If he is a founder, someone else did the hard work. Look for burned-out skeletons in the closet...He may come with grand stories, yet only by his own word. He claims authority from his connections to important people. He spends his time in the group manipulating people against each other. Or, he is absent on important business...His dominance is not earned, yet it is tangible...He breaks the social conventions of the group. Social humans feel fear and anxiety when they do this. This is a dominance mask.
A group of nerds that want to get shit done and work on important problems, who are primed to be optimistic and take what people say to their face at face value, and don't want to waste time with "people problems" are susceptible to these types of characters taking over.
[0] https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
[1] https://hintjens.gitbooks.io/psychopathcode/content/chapter4...
[1] https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s...
Anyway, it's about not disparaging the company, not about disclosing what employees do in their free time. Orgies are just parties, and LSD use is hardly taboo.
http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...
He clearly states why he left. He believes that OpenAI leadership is prioritizing shiny product releases over safety and that this is a mistake.
Even with the best intentions, it's easy for a strong CEO like Altman to lose sight of more subtly important things like safety and to optimize for growth and winning, eventually at all costs. Winning is a super-addictive feedback loop.
Well apparently not if there are women who are saying that the scene and community that all these people are involved in is making women uncomfortable or causing them to be harassed or pressured into bad situations.
A situation can be bad, done informally by people within a community, even if it isn't done literally within the corporate headquarters, or isn't directly the responsibility of one specific company that can be pointed at.
Especially if it is a close-knit group of people who are living together, working together, and involved in the same outside-of-work organizations and non-profits.
You can read what Sonia says herself.
https://x.com/soniajoseph_/status/1791604177581310234
> The ones going to orgies are the effective altruists / safety researchers who are leaving and not signing the non-disparagement agreement.
Indeed, I am sure that the people who are comfortable with the behavior or situation have no need to be pressured into silence.
Dane Wigington (https://www.instagram.com/DaneWigington) is the founder of GeoengineeringWatch.org, a very deep resource.
They have a free documentary called "The Dimming" you can watch on YouTube: https://www.youtube.com/watch?v=rf78rEAJvhY
The documentary includes credible witness testimonies from politicians, including a former Minister of Defence of Canada; multiple US states have now banned the spraying, with more to follow, and the testimony and data provided there will arguably be the most recent.
Here's a video from a "comedy" show 5 years ago - there is a more recent appearance, but I can't find it - that tries to make light of it without any actual discussion, critical thinking, or debate that could enlighten people about the real and potential problems and harms it can cause, keeping them none the wiser - it's just propaganda that tries to minimize the issue: https://www.youtube.com/watch?v=wOfm5xYgiK0
A few of the problems cloud seeding will cause:
- flooding in some regions due to rain-pattern changes
- drought in other areas due to rain-pattern changes
- changes in cloud cover (amount of sun) that alter crop yields - this harms local farming economies, hitting hardest the smaller operations whose risk isn't spread out, potentially forcing them to sell, dip into savings, go bankrupt, etc.
There are also very serious concerns/claims about what exactly they are spraying - which reportedly includes aluminium nanoparticles, which can/would mean:
- at a certain soil concentration of aluminium, plants stop bearing fruit
- aluminium is a fire accelerant, so forest fires will then 1) catch more easily and 2) spread more easily and quickly due to their increased intensity
Of course, discussion of this is heavily suppressed in the mainstream instead of there being deep, thorough conversation with actual experts presenting their cases - people's knee-jerk reaction is often to apply the label of conspiracy theorist or the idea of "detached from reality", and propaganda can convince them of the "save the planet" narrative, which could also be a cover story for those toeing the line and following orders in support of potentially very nefarious plans - doing it blindly because they think they're helping fight "climate change."
There are plenty of accounts on social media that are keeping track of and posting daily about the cloud-seeding operations: https://www.instagram.com/p/CjNjAROPFs0/ - a couple of testimonies.
>in regards to recent stuff about how openai handles equity:
>we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
>there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.
>the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this. https://x.com/sama/status/1791936857594581428
I remember a job a few years ago where they sent me employment paperwork that was for a very different position than the one I was hired for. (I ended up signing it anyways after a few minor changes, because I liked it better than the paperwork I expected to see.)
If OpenAI is a "move fast and break things" sort of organization, I expect they're shuffling a lot of paperwork that Sam isn't co-signing on. I doubt Sam's attitude towards paperwork is fundamentally different from yours or mine.
If Sam didn't know, however, that doesn't exactly reflect well on OpenAI. As Jan put it: "Act with gravitas appropriate for what you're building." >>40391412 IMO this incident should underscore Jan's point.
Accidentally silencing ex-employees is not what safety culture looks like at all. They've got to start hiring experts and reading books. It's a long slog ahead.
1. When do you predict catastrophic global warming/climate change? How do you define "catastrophic"? (Are you pegging to an average temperature increase? [1])
2. When do you predict AGI?
How much uncertainty do you have in each estimate? When you stop and think about it, are you really willing to wager that (1) will happen before (2)? You think you have enough data to make that bet?
[1] I'm not an expert in the latest recommendations, but I see that a +2.7°F increase over preindustrial levels by 2100 is a target by some: https://news.mit.edu/2023/explained-climate-benchmark-rising...
This is where the waters get murky and really risk conspiracy theory. My understanding, though, is that the legal rights fall to the titled owner and financial institutions, with the beneficial owner having very little recourse should anything actually go wrong.
The Great Taking [1] goes into more detail, though importantly I'm only including it here as a related resource if anyone is interested in reading more. The ideas are really interesting and, at least in isolation, do make logical sense to me, but I haven't had time to dig deep enough on my own to feel confident standing behind everything The Great Taking argues.
See e.g. https://en.wikipedia.org/wiki/Shareholder_rights_plan, also known as a 'Poison Pill', to give you inspiration for one example.
https://www.sec.gov/Archives/edgar/data/320193/0001193125220...
It has happened in several cases involving leakers, most recently the Andrew Aude case.