zlacker

OpenAI departures: Why can’t former employees talk?

submitted by fnbr+(OP) on 2024-05-17 18:55:02 | 1254 points 916 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
◧◩◪
7. _delir+sp[view] [source] [discussion] 2024-05-17 22:22:41
>>orions+c5
That appears to be the case, although the wording of what they agree to up front is considerably vaguer than that of the agreement they're reportedly asked to sign post-departure. Link to a thread from the author of the Vox article: https://x.com/KelseyTuoc/status/1791584341669396560
◧◩
11. apsec1+5t[view] [source] [discussion] 2024-05-17 22:56:32
>>autono+Rs
Non-disparagement clauses forbid all negative statements, whether true or not.

https://www.clarkhill.com/news-events/news/the-importance-of...

◧◩
16. apsec1+At[view] [source] [discussion] 2024-05-17 23:02:04
>>0cf861+0t
The Vox article says that it's a lifetime agreement:

https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

◧◩
29. waveso+hu[view] [source] [discussion] 2024-05-17 23:08:36
>>OldMat+kt
Their head of alignment just resigned >>40391299
31. thorum+Bu[view] [source] 2024-05-17 23:10:57
>>fnbr+(OP)
Extra respect is due to Jan Leike, then:

https://x.com/janleike/status/1791498174659715494

◧◩◪◨
38. solard+iv[view] [source] [discussion] 2024-05-17 23:18:34
>>a_wild+2u
In the US, the Constitution prevents the government from regulating your speech.

It does not prevent you from entering into contracts with other private entities, like your company, about what THEY allow you to say or not. In this case there might be other laws about whether a company can unilaterally force that on you after the fact, but that's not a free speech consideration, just a contract dispute.

See https://www.themuse.com/advice/non-disparagement-clause-agre...

◧◩◪
48. gremli+gw[view] [source] [discussion] 2024-05-17 23:28:35
>>throwu+Tt
Plus what he (allegedly) did to his sister when she was a child: >>37785072
◧◩◪◨
52. Taylor+nw[view] [source] [discussion] 2024-05-17 23:29:18
>>a_wild+2u
I think we need to face the fact that these companies aren’t trustworthy in upholding their own stated morals. We need to consider whether streaming video from our phone to a complex AI system that can interpret everything it sees might have longer term privacy implications. When you think about it, a cloud AI system is an incredible surveillance machine. You want to talk to it about important questions in your life, and it would also be capable of dragnet surveillance based on complex concepts like “show me all the people organizing protests” etc.

Consider for example that when Amazon bought the Ring security camera system, it had a “god mode” that allowed executives and a team in Ukraine unlimited access to all camera data. It wasn’t just a consumer product for home users, it was a mass surveillance product for the business owners:

https://theintercept.com/2019/01/10/amazon-ring-security-cam...

The EFF has more information on other privacy issues with that system:

https://www.eff.org/deeplinks/2019/08/amazons-ring-perfect-s...

These big companies and their executives want power. Withholding huge financial gain from ex-employees to maintain their silence is one way of retaining that power.

◧◩◪
78. thorum+My[view] [source] [discussion] 2024-05-17 23:51:39
>>a_wild+Xv
The superalignment team was not focused on that kind of “safety” AFAIK. According to the blog post announcing the team,

https://openai.com/index/introducing-superalignment/

> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

> While superintelligence seems far off now, we believe it could arrive this decade.

> Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

> How do we ensure AI systems much smarter than humans follow human intent?

> Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.

◧◩◪◨⬒
82. ctoth+5z[view] [source] [discussion] 2024-05-17 23:54:14
>>N0b8ez+wy
https://www.dwarkeshpatel.com/p/john-schulman
◧◩
86. istjoh+Hz[view] [source] [discussion] 2024-05-17 23:59:46
>>photoc+5v
> In most cases there is no free exercise whatever of the judgment or of the moral sense; but they put themselves on a level with wood and earth and stones; and wooden men can perhaps be manufactured that will serve the purpose as well. Such command no more respect than men of straw or a lump of dirt.[0]

0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)

110. fragme+6C[view] [source] 2024-05-18 00:26:16
>>fnbr+(OP)
It's time to find a lawyer. I'm not one, but there's an intersection with California SB 331, also known as “The Silenced No More Act”. While it is focused more on sexual harassment, it's not limited to that, and these contracts may run afoul of it.

https://silencednomore.org/the-silenced-no-more-act

121. croeme+gD[view] [source] 2024-05-18 00:38:16
>>fnbr+(OP)
Link should probably go here instead of X: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

This is the article that the author talks about on X.

◧◩◪◨⬒⬓
144. thorum+fF[view] [source] [discussion] 2024-05-18 01:02:42
>>api+pD
CEV is one proposed answer to this question. Wikipedia has a good short explanation here:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

And here is a more detailed explanation:

https://intelligence.org/files/CEV.pdf

149. jay-ba+DF[view] [source] 2024-05-18 01:06:22
>>fnbr+(OP)
It probably would be better to switch the link from the X post to the Vox article [0].

From the article:

“““

It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

”””

[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

◧◩◪◨⬒
151. thorum+SF[view] [source] [discussion] 2024-05-18 01:09:07
>>RcouF1+HB
You might be interested in how CEV, one framework proposed for superalignment, addresses that concern:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

> our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

◧◩◪
186. 0xDEAF+gJ[view] [source] [discussion] 2024-05-18 01:54:14
>>ambica+ZH
I think you have to either log in to X or use a frontend if you want to read the entire thread. Here's a frontend:

https://nitter.poast.org/janleike/status/1791498174659715494

◧◩
187. dang+HJ[view] [source] [discussion] 2024-05-18 02:00:32
>>jay-ba+DF
(Parent comment was posted to >>40394778 before we merged that thread hither.)
◧◩
201. eru+PL[view] [source] [discussion] 2024-05-18 02:29:53
>>Button+7J
> What's the consideration for this contract?

Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.

https://en.wikipedia.org/wiki/Peppercorn_(law)

There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.

◧◩
209. 0xDEAF+TM[view] [source] [discussion] 2024-05-18 02:47:31
>>yumraj+5L
Similar points made here, if anyone is interested in signing: https://www.openailetter.org/
◧◩
211. 0xDEAF+6N[view] [source] [discussion] 2024-05-18 02:52:17
>>thorum+Bu
At the end of the thread, he says he thinks OpenAI can "ship" the culture changes necessary for safety. That seems kind of implausible to me? So many safety staffers have quit over the past few years. If Jan really thought change was possible, why isn't he still working at OpenAI, trying to make it happen from the inside?

I think it may be time for something like this: https://www.openailetter.org/

◧◩
215. 0xDEAF+LN[view] [source] [discussion] 2024-05-18 03:07:24
>>31337L+EA
Saw this comment suddenly move way down in the comment rankings. Somehow I only notice this happening on OpenAI threads:

>>38342850

My guess would be that YC founders like sama have some sort of special power to slap down comments that they feel are violating HN discussion guidelines.

◧◩◪◨
224. almost+gO[view] [source] [discussion] 2024-05-18 03:18:41
>>whimsi+3H
> Note at offer time candidates do not know how many PPUs they will be receiving or how many exist in total. This is important because it’s not clear to candidates if they are receiving 1% or 0.001% of profits for instance. Even when giving options, some startups are often unclear or simply do not share the total number of outstanding shares. That said, this is generally considered bad practice and unfavorable for employees. Additionally, tender offers are not guaranteed to happen and the cadence may also not be known.

> PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years. Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value. So in the offer example above, the candidate received $2M worth of PPUs, which means that their capped amount they could sell them for would be $20M

> The most recent liquidation event we’re aware of happened during a tender offer earlier this year. It was during this event that some early employees were able to sell their profit participation units. It’s difficult to know how often these events happen and who is allowed to sell, though, as it’s on company discretion.

This NDA wrinkle is another negative. Honestly, I think the entire OpenAI compensation model is smoke and mirrors, which is normal for startups and obviously inferior to RSUs.

https://www.levels.fyi/blog/openai-compensation.html

◧◩
260. sangno+aR[view] [source] [discussion] 2024-05-18 04:21:10
>>atomic+lM
> if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.

Isn't that the reason more competent lawyers put in the royal lives[1] clause? It specifies that the contract is valid until 21 years after the death of the last currently-living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly be beyond the lifetime of any currently-employed persons.

1. https://en.wikipedia.org/wiki/Royal_lives_clause

◧◩◪◨
271. xvecto+QR[view] [source] [discussion] 2024-05-18 04:32:51
>>candid+YO
Most existing big tech datacenters use mostly carbon free or renewable energy.

The vast majority of datacenters currently in production will be entirely powered by carbon free energy. From best to worst:

1. Meta: 100% renewable

2. AWS: 90% renewable

3. Google: 64% renewable with 100% renewable energy credit matching

4. Azure: 100% carbon neutral

[1]: https://sustainability.fb.com/energy/

[2]: https://sustainability.aboutamazon.com/products-services/the...

[3]: https://sustainability.google/progress/energy/

[4]: https://azure.microsoft.com/en-us/explore/global-infrastruct...

◧◩◪◨⬒⬓
304. oblio+XT[view] [source] [discussion] 2024-05-18 05:16:19
>>saalwe+3O
Based on this they've had $1tn profits since 2009: https://companiesmarketcap.com/apple/earnings/
◧◩
306. r721+ZT[view] [source] [discussion] 2024-05-18 05:17:06
>>thorum+Bu
Discussion of Jan Leike's thread: >>40391412 (67 comments)
◧◩
395. 0xDEAF+W01[view] [source] [discussion] 2024-05-18 07:07:59
>>mise_e+TX
>OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

>...

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.

From OpenAI's charter: https://openai.com/charter/

Now read Jan Leike's departure statement: >>40391412

That's why this is everyone's business.

◧◩
405. 0xDEAF+L11[view] [source] [discussion] 2024-05-18 07:23:16
>>benree+sY
There's another comment saying something sort of similar elsewhere in this thread: >>40396366

What made you think it was the next Enron five years ago?

I appreciate you having the guts to stand up to them.

◧◩
412. nurple+221[view] [source] [discussion] 2024-05-18 07:26:45
>>Button+7J
The thing is that this is a private company, so there is no public market to provide liquidity. The company can make itself the sole source of liquidity, at its option, by placing sell restrictions on the grants. Toe the line, or you will find you never get to participate in a liquidity event.

There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.

[0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...

442. iamfli+E41[view] [source] 2024-05-18 08:05:25
>>fnbr+(OP)
Doesn’t seem to be everyone - https://x.com/officiallogank/status/1791652970670747909
◧◩◪◨⬒⬓⬔⧯
453. ben_w+g51[view] [source] [discussion] 2024-05-18 08:17:15
>>andrep+C41
Can you get that book out of an LLM?

Because that's the distinction being argued here: it's "a handful"[0] of probabilities, not the complete work.

[0] I'm not sold on the phrasing "a handful", but I don't care enough to argue terminology; the term "handful" feels like it's being used in a sorites paradox kind of way: https://en.wikipedia.org/wiki/Sorites_paradox

◧◩◪◨⬒⬓⬔⧯
470. surfin+f61[view] [source] [discussion] 2024-05-18 08:31:30
>>throwa+f51
My analogy is based on the fact that nobody could see what was inside CDOs, nor did they want to; all they wanted to do was pass them on to the next sucker. It was all fun until it all blew up. LLM operators behave in the same way with copyrighted material. For context, read https://nymag.com/news/business/55687/
◧◩◪◨⬒⬓⬔⧯
491. dgolds+O71[view] [source] [discussion] 2024-05-18 08:48:00
>>throwa+u51
Some of them are suing

https://www.nytimes.com/2023/12/27/business/media/new-york-t...
https://www.reuters.com/legal/us-newspapers-sue-openai-copyr...
https://www.washingtonpost.com/technology/2024/04/09/openai-...

Some decided to make deals instead

◧◩◪◨
496. andyjo+981[view] [source] [discussion] 2024-05-18 08:52:50
>>bazoom+h71
Indeed it is. Obligatory xkcd - https://xkcd.com/1494/
◧◩◪◨⬒⬓⬔⧯
499. dgolds+g81[view] [source] [discussion] 2024-05-18 08:54:50
>>KoolKa+x61
I don't think that's accurate. The US Copyright Office last year issued guidance that basically said anything generated with AI can't be copyrighted, as human authorship/creation is required for copyright. Works can incorporate AI-generated content, but then those parts aren't covered by copyright.

https://www.federalregister.gov/documents/2023/03/16/2023-05...

So I think the law, at least as currently interpreted, does care about the process.

Though maybe you meant as to whether a new work infringes existing copyright? As this guidance is clearly about new copyright.

◧◩◪◨⬒⬓
523. dgolds+Wa1[view] [source] [discussion] 2024-05-18 09:38:28
>>Gravit+771
I recently read an article, which I annoyingly can't find again, about an art director at a company that decided to hire some prompters. They got some art, told them to completely change it, got other art, told them to make smaller changes... and then got nothing useful, as the prompters couldn't tell the AI "like that but make this change". AI art may get there in a few years or maybe a decade or two, but it's not there yet. (End of that article: they fired the prompters after a few days.)

An AI-enhanced Photoshop, however, could do wonders, as the base capabilities seem to be mostly there. I haven't used any of the newer AI stuff myself, but https://www.shruggingface.com/blog/how-i-used-stable-diffusi... makes it pretty clear the building blocks are largely there. So my guess is the main disconnect is in making the machines understand natural language instructions for how to change the art.

◧◩◪◨⬒
538. FartyM+Ac1[view] [source] [discussion] 2024-05-18 10:09:14
>>comboy+kb1
People who have worked with him have publicly called him a manipulative liar:

https://www.reddit.com/r/OpenAI/comments/1804u5y/former_open...

◧◩◪◨⬒
539. romwel+Ec1[view] [source] [discussion] 2024-05-18 10:09:54
>>edanm+pW
Consider this:

1. If he didn't turn down the money, you wouldn't have heard of him at all;

2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people who are its intended audience heard the message loud and clear.

3. On a very basic level, it's very easy to understand that there's gotta be something wrong with the award if a deserving recipient turns it down. What exactly is wrong is left as an exercise to the reader — as you'd expect of a mathematician like Perelman.

Quote (from [1]):

From the few public statements made by Perelman and close colleagues, it seems he had become disillusioned with the entire field of mathematics. He was the purest of the purists, consumed with his love for mathematics, and completely uninterested in academic politics, with its relentless jockeying for position and squabbling over credit. He denounced most of his colleagues as conformists. When he opted to quit professional mathematics altogether, he offered this confusing rationale: “As long as I was not conspicuous, I had a choice. Either to make some ugly thing or, if I didn’t do this kind of thing, to be treated as a pet. Now when I become a very conspicuous person, I cannot stay a pet and say nothing. That is why I had to quit.”

This explanation is confusing only to someone who has never tried to get a tenured position in academia.

Perelman was one of the few people to not only give the finger to the soul-crushing, dehumanizing system, but to also call it out in a way that stung.

He wasn't the only one; but the only other person I can think of is Alexander Grothendieck [2], who went as far as declaring that publishing any of his work would be against his will.

Incidentally, both are of Russian-Jewish origin/roots, and almost certainly autistic.

I find their views very understandable and relatable, but then again, I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).

[1] https://nautil.us/purest-of-the-purists-the-puzzling-case-of...

[2] https://en.wikipedia.org/wiki/Alexander_Grothendieck

◧◩◪◨
541. throwu+Kc1[view] [source] [discussion] 2024-05-18 10:11:13
>>solida+vX
https://www.nlrb.gov/news-outreach/news-story/board-rules-th...

It’s a recent ruling.

◧◩◪◨⬒⬓⬔⧯▣▦▧
560. d4704+rf1[view] [source] [discussion] 2024-05-18 10:55:37
>>Hnrobe+ia1
We’d usually point people here to get a better overview of how options work:

https://carta.com/learn/equity/stock-options/

◧◩◪◨⬒
567. jbelli+Zf1[view] [source] [discussion] 2024-05-18 11:01:59
>>riehwv+4X
I don't think the right to repurchase is routine. It was a scandal a few years ago when it turned out that Skype did that. https://www.forbes.com/sites/dianahembree/2018/01/10/startup...
◧◩◪
594. killer+Hk1[view] [source] [discussion] 2024-05-18 11:58:56
>>adamta+dH
I have a theory why people end up with wildly different estimates...

Given the model is probabilistic and does many things in parallel, its output can be understood as a mixture, e.g. 30% trash, 60% rehashed training material, 10% reasoning.

People probe the model in different ways, they see different results, and they draw different conclusions.

E.g. somebody who assumes AI should have impeccable logic will find "trash" content (e.g. incorrectly retrieved memory) and will declare that the whole AI thing is overhyped bullshit.

Other people might call the model a "stochastic parrot" as they recognize it basically just interpolates between parts of the training material.

Finally, people who want to probe reasoning capabilities might find it among the trash. E.g. people found that LLMs can evaluate non-trivial Python code as long as it sends intermediate results to output: https://x.com/GrantSlatton/status/1600388425651453953
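
A minimal sketch of the kind of probe described there, assuming a toy example (the function and values below are hypothetical, not from the linked post): the model is asked to step through Python that prints its intermediate state, so every printed line can be checked against a real interpreter.

    # Hypothetical probe: ask the LLM to predict every printed line,
    # then compare against what Python actually outputs.
    def collatz_steps(n: int) -> int:
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
            print(f"step {steps}: n = {n}")  # intermediate result exposed
        return steps

    print("total steps:", collatz_steps(7))  # a correct trace ends at step 16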

I interpret "feel the AGI" (Ilya Sutskever slogan, now repeated by Jan Leike) as a focus on these capabilities, rather than on mistakes it makes. E.g. if we go from 0.1% reasoning to 1% reasoning it's a 10x gain in capabilities, while to an outsider it might look like "it's 99% trash".

In any case, I'd rather trust the intuition of people like Ilya Sutskever and Jan Leike. They aren't trying to sell something, and overhyping the tech is not in their interest.

Regarding "missing something really critical", it's obvious that human learning is much more efficient than NN learning. So there's some algorithm people are missing. But is it really required for AGI?

And regarding "It cannot reason" - I've seen LLMs doing rather complex stuff which is almost certainly not in the training set, what is it if not reasoning? It's hard to take "it cannot reason" seriously from people

◧◩◪◨⬒⬓
606. colibr+zm1[view] [source] [discussion] 2024-05-18 12:16:08
>>candid+5S
Or we could use microwaves to drill holes as deep as 20km to tap geothermal energy anywhere in the world

https://www.quaise.energy/

◧◩◪◨⬒⬓⬔⧯▣
612. colibr+mo1[view] [source] [discussion] 2024-05-18 12:32:08
>>treme+3m1
No, in fact I'm praising Musk for his project management abilities and his ability to take risks.

>"nu uh, it was in scifi first?" Wow.

https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X

>NASA had taken on the project grudgingly after having been "shamed" by its very public success under the direction of the SDIO.[citation needed] Its continued success was cause for considerable political in-fighting within NASA due to it competing with their "home grown" Lockheed Martin X-33/VentureStar project. Pete Conrad priced a new DC-X at $50 million, cheap by NASA standards, but NASA decided not to rebuild the craft in light of budget constraints

"Quotation is a serviceable substitute for wit." - Oscar Wilde

◧◩◪◨⬒⬓⬔
622. aleph_+dr1[view] [source] [discussion] 2024-05-18 12:59:07
>>benree+mm1
> The New Yorker piece is pretty terrifying and manages to be so while bending over backwards to present both sides if not maybe even suck up to SV a bit. Certainly no one forced Altman to say on the record that Ice Nine in the water glass was what he had planned for anyone who crossed him, and no one forced pg to say, likewise on the record, that “Sam’s real talent is becoming powerful” or something to that effect.

Article: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

◧◩◪◨⬒⬓⬔⧯▣
656. _heimd+HB1[view] [source] [discussion] 2024-05-18 14:21:47
>>throwa+ox1
Looks like I used the wrong term there, sorry. I was referring to Cede & Co, and in the moment assumed they could be considered a clearing house. It is technically called a certificate depository, sorry for the confusion there.

Cede & Co technically owns most of the stock certificates today [1]. If I buy a share of stock I end up actually owning an IOU for a stock certificate.

You can actually confirm this yourself if you own any stock. Call the broker that manages your account and ask whose name is on the stock certificate. It definitely isn't your name. You'll likely get confused or unclear answers, but if you're persistent enough you will indeed find that the certificate is almost certainly in the name of Cede & Co and there is no certificate in your name, likely no share identifier assigned to you either. You just own the promise of a share, which ultimately isn't a problem unless something massive breaks (at which point we have problems anyway).

[1] https://en.m.wikipedia.org/wiki/Cede_and_Company

◧◩◪◨⬒
667. awesom+yD1[view] [source] [discussion] 2024-05-18 14:38:33
>>concor+uW
See this great video from Sabine Hossenfelder here: https://www.youtube.com/watch?v=4S9sDyooxf4

We have surpassed the 1.5°C goal and are on track towards 3.5°C to 5°C. This accelerates the climate change timeline so that we'll see effects postulated for the end of the century in about ~20 years.

◧◩◪
668. doctor+ZD1[view] [source] [discussion] 2024-05-18 14:42:54
>>paulry+wB1
It's a taught ideology/theory, the great man theory: https://en.m.wikipedia.org/wiki/Great_man_theory
◧◩◪◨⬒⬓
671. SJC_Ha+hF1[view] [source] [discussion] 2024-05-18 14:53:18
>>romwel+Ec1
> 1. If he didn't turn down the money, you wouldn't have heard of him at all;

Perelman provided a proof of the Poincare Conjecture, which had stumped mathematicians for a century.

It was also one of the seven Millennium Prize Problems https://www.claymath.org/millennium-problems/, and as of 2024, the only one to be solved.

Andrew Wiles became pretty well known after proving Fermat's Last Theorem, despite there not being a financial reward.

◧◩◪◨⬒⬓⬔
697. raverb+ML1[view] [source] [discussion] 2024-05-18 15:44:25
>>llamai+hq1
https://x.com/ylecun/status/1791850158344249803
◧◩◪◨⬒⬓⬔
705. dmoy+WM1[view] [source] [discussion] 2024-05-18 15:55:33
>>benree+mm1
For anyone else like me who hasn't read Kurt Vonnegut, but does know about different ice states (e.g. Ice IX):

"Ice Nine" is a fictional assassination device that makes you turn into ice after consuming ice (?) https://en.m.wikipedia.org/wiki/Ice-nine

"Ice IX" (ice nine) is Ice III at a low enough temperature and high enough pressure to be proton-ordered https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases

So here, Sam Altman is making a death threat.

◧◩◪
711. stale2+TN1[view] [source] [discussion] 2024-05-18 16:07:06
>>sneak+YL
Right now, there is some publicity on Twitter regarding AGI/OpenAI/EA LSD cnc parties (consent non consent/simulated rape parties).

So maybe it's related to that.

https://twitter.com/soniajoseph_/status/1791604177581310234

◧◩◪◨
725. marcus+eQ1[view] [source] [discussion] 2024-05-18 16:33:45
>>justin+dO1
I'm sure businesses will capture some of the value, but is there any reason to assume they'll capture all or even most of it?

Over the last ~ 50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1] and real personal (not household) income is up 150%[2].

It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!

Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.

[0] https://fred.stlouisfed.org/series/OPHNFB
[1] https://dqydj.com/sp-500-profit-margin/
[2] https://fred.stlouisfed.org/series/MEPAINUSA672N

◧◩◪◨⬒⬓
744. bookaw+KS1[view] [source] [discussion] 2024-05-18 17:02:27
>>nar001+ng1
The story of the "YC mafia" takeover of Conde Nast-era Reddit, as summarized by ex-CEO Yishan, who resigned after tiring of Altman's constant Machiavellian machinations, is also hilarious and foreshadows later events[0]. I'm sure by the time Altman resigned from the Reddit board OpenAI had long since incorporated the entire corpus into ChatGPT.

At the moment, all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there. I mentioned this in a comment elsewhere, but Peter Hintjens' (ZeroMQ, RIP) book "The Psychopath Code"[1] is rather on point in this context. He notes that psychopaths are attracted to project groups that have assets and no defenses, i.e. non-profits:

If a group has assets and no defenses, it is inevitable [a psychopath] will invade the group. There is no "if" here. Indeed, you may see several psychopaths striving for advantage...[the psychopath] may be a founder, yet that is rare. If he is a founder, someone else did the hard work. Look for burned-out skeletons in the closet...He may come with grand stories, yet only by his own word. He claims authority from his connections to important people. He spends his time in the group manipulating people against each other. Or, he is absent on important business...His dominance is not earned, yet it is tangible...He breaks the social conventions of the group. Social humans feel fear and anxiety when they do this. This is a dominance mask.

A group of nerds that want to get shit done and work on important problems, who are primed to be optimistic and take what people say to their face at face value, and don't want to waste time with "people problems" are susceptible to these types of characters taking over.

[0] https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

[1]https://hintjens.gitbooks.io/psychopathcode/content/chapter4...

◧◩
764. rKarpi+vW1[view] [source] [discussion] 2024-05-18 17:42:21
>>milank+QG1
They have multiple NDAs, including ones that are signed before joining the company [1].

[1]https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s...

◧◩◪◨⬒
779. reduce+I32[view] [source] [discussion] 2024-05-18 18:40:24
>>LtWorf+RT1
That sounds like the word problems that are on American AP Calc tests. You can be the judge of them here: https://apcentral.collegeboard.org/media/pdf/ap24-frq-calcul...
◧◩◪◨
780. MacsHe+M32[view] [source] [discussion] 2024-05-18 18:41:31
>>stale2+TN1
The ones going to orgies are the effective altruists / safety researchers who are leaving and not signing the non-disparagement agreement. https://x.com/youraimarketer/status/1791616629912051968

Anyway, it's about not disparaging the company, not about disclosing what employees do in their free time. Orgies are just parties and LSD use is hardly taboo.

◧◩◪
787. wwwest+552[view] [source] [discussion] 2024-05-18 18:51:48
>>staunt+nV1
The system already has been a superorganism/AI for a long time:

http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...

797. shon+r72[view] [source] 2024-05-18 19:13:03
>>fnbr+(OP)
The article mentions it briefly, but Jan Leike is talking. Reference: https://x.com/janleike/status/1791498174659715494?s=46&t=pO4...

He clearly states why he left. He believes that OpenAI leadership is prioritizing shiny product releases over safety and that this is a mistake.

Even with the best intentions, it's easy for a strong CEO like Altman to lose sight of more subtly important things like safety and optimize for growth and winning, eventually at all costs. Winning is a super-addictive feedback loop.

◧◩◪◨⬒
810. stale2+qa2[view] [source] [discussion] 2024-05-18 19:42:15
>>MacsHe+M32
> Orgies are just parties

Well, apparently not, if there are women saying that the scene and community that all these people are involved in is making women uncomfortable, or causing them to be harassed or pressured into bad situations.

A situation can be bad, done informally by people within a community, even if it isn't happening literally within the corporate headquarters, or isn't directly the responsibility of one specific company that can be pointed at.

Especially if it is a close-knit group of people who are living together, working together, and involved in the same out-of-work organizations and nonprofits.

You can read what Sonia says herself.

https://x.com/soniajoseph_/status/1791604177581310234

> The ones going to orgies are the effective altruists / safety researchers who are leaving and not signing the non-disparagement agreement.

Indeed, I am sure that the people who are comfortable with the behavior or situation have no need to be pressured into silence.

◧◩◪◨⬒⬓⬔⧯▣▦
818. loceng+qe2[view] [source] [discussion] 2024-05-18 20:12:27
>>Animal+g92
At least in some parts of the world, and at least a year ago, the chemtrail cloud seeding ramped up considerably.

Dane Wigington (https://www.instagram.com/DaneWigington) is the founder of GeoengineerWatch.org, a very deep resource.

They have a free documentary called "The Dimming" you can watch on YouTube: https://www.youtube.com/watch?v=rf78rEAJvhY

The documentary includes credible witness testimonies from politicians, including a previous Minister of Defense for Canada; multiple states in the US have banned the spraying now, with more to follow, and the testimony and data provided there will arguably be the most recent.

Here's a video from a "comedy" show from 5 years ago - there is a more recent appearance but I can't find it - that attempts to make light of it, without an actual discussion involving critical thinking or debate that could enlighten people about the actual and potential problems and harms it can cause, keeping them none the wiser - it's just propaganda that tries to minimize the issue: https://www.youtube.com/watch?v=wOfm5xYgiK0

A few of the problems cloud seeding will cause:

- flooding in regions due to rain pattern changes
- drought in areas due to rain pattern changes
- cloud cover (amount of sun) changes crop yields - this harms the local economies of farmers, impacting smaller farming operations more, whose risk isn't spread out, potentially forcing them to sell, dip into savings, go bankrupt, etc.

There are also very serious concerns/claims made about what exactly they are spraying - which includes aluminium nanoparticles, which can/would mean:

- at a certain soil concentration of aluminium, plants stop bearing fruit
- aluminium is a fire accelerant, so forest fires will then 1) more easily catch, and 2) more easily and quickly spread due to their increased intensity

Of course, discussion of this is heavily suppressed in the mainstream instead of having deep, thorough conversations with actual experts presenting their cases - the label of conspiracy theorist or the idea of being "detached from reality" is often people's knee-jerk reaction; and propaganda can convince them of the "save the planet" narrative, which could also be a cover story for those toeing the line and following orders in support of potentially very nefarious plans - doing it blindly because they think they're helping fight "climate change."

There are plenty of accounts on social media that are keeping track of and posting daily about the cloud seeding operations: https://www.instagram.com/p/CjNjAROPFs0/ - a couple of testimonies.

827. tim333+rl2[view] [source] 2024-05-18 21:10:30
>>fnbr+(OP)
Sama update on X, says sorry:

>in regards to recent stuff about how openai handles equity:

>we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.

>there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

>the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this. https://x.com/sama/status/1791936857594581428

◧◩◪◨
840. 0xDEAF+1y2[view] [source] [discussion] 2024-05-18 23:25:19
>>insane+hs2
I've been disparaging Sam and OpenAI a fair amount in this thread, but I find it plausible that Sam didn't know.

I remember a job a few years ago where they sent me employment paperwork that was for a very different position than the one I was hired for. (I ended up signing it anyways after a few minor changes, because I liked it better than the paperwork I expected to see.)

If OpenAI is a "move fast and break things" sort of organization, I expect they're shuffling a lot of paperwork that Sam isn't co-signing on. I doubt Sam's attitude towards paperwork is fundamentally different from yours or mine.

If Sam didn't know, however, that doesn't exactly reflect well on OpenAI. As Jan put it: "Act with gravitas appropriate for what you're building." >>40391412 IMO this incident should underscore Jan's point.

Accidentally silencing ex-employees is not what safety culture looks like at all. They've got to start hiring experts and reading books. It's a long slog ahead.

◧◩◪◨⬒⬓
843. xpe+VF2[view] [source] [discussion] 2024-05-19 01:22:27
>>candid+5S
It isn't a quantitative model unless you give a prediction of some kind. In this case, dates (or date ranges) would make sense.

1. When do you predict catastrophic global warming/climate change? How do you define "catastrophic"? (Are you pegging to an average temperature increase? [1])

2. When do you predict AGI?

How much uncertainty do you have in each estimate? When you stop and think about it, are you really willing to wager that (1) will happen before (2)? You think you have enough data to make that bet?

[1] I'm not an expert in the latest recommendations, but I see that a +2.7°F increase over preindustrial levels by 2100 is a target by some: https://news.mit.edu/2023/explained-climate-benchmark-rising...

◧◩◪◨⬒⬓⬔⧯▣▦
844. _heimd+pH2[view] [source] [discussion] 2024-05-19 01:44:08
>>balder+f62
If I'm not mistaken, at least in the US most brokers also aren't the titled owner. That falls to Cede & Co which acts as a securities depository.

This is where the waters get murky and really risk conspiracy theory. My understanding, though, is that the legal rights fall to the titled owner and financial institutions, with the beneficial owner having very little recourse should anything actually go wrong.

The Great Taking [1] goes into more detail, though importantly I'm only including it here as a related resource if anyone is interested to read more. The ideas are really interesting and, at least in isolation, do make logical sense to me but I haven't had time to do my own digging deep enough to really feel confident enough to stand behind everything The Great Taking argues.

[1] https://thegreattaking.com/

◧◩◪◨⬒⬓⬔⧯▣
863. eru+Kr3[view] [source] [discussion] 2024-05-19 12:05:53
>>p1esk+cR2
If they have even moderately clever lawyers and accountants, yes.

See eg https://en.wikipedia.org/wiki/Shareholder_rights_plan also known as a 'Poison Pill' to give you inspiration for one example.

◧◩◪
875. diebef+N74[view] [source] [discussion] 2024-05-19 18:56:53
>>season+ri2
Please read Apple's Employee Stock Plan Agreement, in particular Section 9.

https://www.sec.gov/Archives/edgar/data/320193/0001193125220...

It has happened in several cases involving leakers, most recently the Andrew Aude case.

◧◩◪◨⬒⬓⬔⧯
878. loceng+wb4[view] [source] [discussion] 2024-05-19 19:32:07
>>awesom+PN3
My other comment will link you to plenty of resources: >>40378842
◧◩◪◨⬒⬓⬔⧯▣▦
886. loceng+fK4[view] [source] [discussion] 2024-05-20 01:56:23
>>awesom+5x4
My bad! Sorry, not sure how that happened - here it is: >>40401703
[go to top]