zlacker

[parent] [thread] 45 comments
1. Pet_An+(OP)[view] [source] 2024-05-15 14:13:54
In case people haven't noticed, this is the second resignation in as many days.

>>40361128

replies(4): >>the_mi+53 >>pfist+93 >>hn_thr+95 >>dawood+ei
2. the_mi+53[view] [source] 2024-05-15 14:27:56
>>Pet_An+(OP)
And entirely predictable from the first one: https://openai.com/index/introducing-superalignment/
replies(2): >>uLogMi+C3 >>btown+66
3. pfist+93[view] [source] 2024-05-15 14:28:10
>>Pet_An+(OP)
I have noticed, and I am concerned that they were the leaders of the Superalignment team.
replies(4): >>zer00e+w4 >>transc+05 >>treme+R5 >>dontup+57
◧◩
4. uLogMi+C3[view] [source] [discussion] 2024-05-15 14:30:12
>>the_mi+53
Imagine trying to keep something so far above us in intelligence, caged. Scary stuff...
replies(1): >>binary+Ui
◧◩
5. zer00e+w4[view] [source] [discussion] 2024-05-15 14:35:04
>>pfist+93
Sam Altman superaligned them right out the door...
◧◩
6. transc+05[view] [source] [discussion] 2024-05-15 14:37:55
>>pfist+93
On the other hand, they clearly weren’t concerned enough about the issue to continue working on it.
replies(4): >>dontup+k7 >>llamai+U8 >>scarmi+V8 >>HarHar+0n
7. hn_thr+95[view] [source] 2024-05-15 14:38:27
>>Pet_An+(OP)
They resigned together on the same day - people are just announcing this like it's some type of "drip drip" of people leaving to build suspense.

While Jan's (very pithy) tweet came later in the evening, at the time of Ilya's announcement yesterday I was already reading other posts saying that Jan was also leaving.

◧◩
8. treme+R5[view] [source] [discussion] 2024-05-15 14:41:22
>>pfist+93
Reads like beginnings of a good dystopian movie script
◧◩
9. btown+66[view] [source] [discussion] 2024-05-15 14:42:51
>>the_mi+53
Makes me wonder if that 20% compute commitment to superalignment research was walked back (or redesigned so as to be distant from the original mission). Or, perhaps the two deemed that even more commitment was necessary, and were dissatisfied with Altman's response.

Either way, if it's enough to cause them both to think it's better to research outside of the opportunities and access to data that OpenAI provides, I don't see a scenario where this doesn't indicate a significant shift in OpenAI's commitment to superalignment research and safety. One hopes that, at the very least, Microsoft's interest in brand integrity incentivizes some modicum of continued commitment to safety research.

replies(1): >>dontup+G7
◧◩
10. dontup+57[view] [source] [discussion] 2024-05-15 14:47:35
>>pfist+93
Turns out we already have alignment, it's called capitalism.
replies(3): >>mbgerr+Pa >>throwa+rq1 >>cyanyd+FA4
◧◩◪
11. dontup+k7[view] [source] [discussion] 2024-05-15 14:48:36
>>transc+05
One could argue that at this point OpenAI is being Embraced and Extended by Microsoft and is unlikely to have much autonomy or groundbreaking impact one way or another.
◧◩◪
12. dontup+G7[view] [source] [discussion] 2024-05-15 14:50:06
>>btown+66
Ironically, Microsoft is the one that's notoriously terrible at checking their "AI" products before releasing them.

Besides the infamous Tay, there was that apparently un-aligned WizardLM-2 [or something like that] model from them, which got released by mistake for about 12 hours.

replies(3): >>buildb+vj >>fzzzy+NM >>jerjer+XK1
◧◩◪
13. llamai+U8[view] [source] [discussion] 2024-05-15 14:55:24
>>transc+05
Ah yes, a scientist refusing to work on the hydrogen bomb couldn't have been all that concerned about it.
◧◩◪
14. scarmi+V8[view] [source] [discussion] 2024-05-15 14:55:26
>>transc+05
If your ostensible purpose is being sidelined by decision makers, trying to fight back is often a good option, but sometimes you fail. Admitting failure and focusing on other approaches is the right choice at that point.
◧◩◪
15. mbgerr+Pa[view] [source] [discussion] 2024-05-15 15:05:05
>>dontup+57
This is true, and we do not talk about it enough. Moreover, capitalism is itself an unaligned AI, and understanding it through that lens clarifies a great deal.
replies(5): >>lowmag+XJ >>panark+s41 >>exolym+L91 >>sgregn+9i1 >>sudosy+kd2
16. dawood+ei[view] [source] 2024-05-15 15:38:15
>>Pet_An+(OP)
A few more people involved in the alignment efforts have left recently: https://x.com/ShakeelHashim/status/1790685752134656371
◧◩◪
17. binary+Ui[view] [source] [discussion] 2024-05-15 15:41:16
>>uLogMi+C3
I’m genuinely curious - do you actually believe that GPT is a superintelligence? Because I have the opposite experience: it consistently fails to follow even the most basic instructions. For a little while I thought maybe I was doing it wrong and needed better prompts, but then I realized that its zero-shot and few-shot capabilities are really hit and miss. Furthermore, a superior intelligence shouldn’t need us to conform to its persnickety requirements; it should be able to adapt far better than it actually does.
replies(1): >>uLogMi+WA
◧◩◪◨
18. buildb+vj[view] [source] [discussion] 2024-05-15 15:43:58
>>dontup+G7
As an MS employee working on LLMs, that entire saga is super weird. We need approval for everything! Releasing anything without approval is almost unheard of.

We can’t just drop papers on arXiv. There is no way running your own Twitter, GitHub, etc. as a separate group would be allowed.

I checked fairly recently to see if the model was actually re-released; it doesn’t seem to be, and I find this telling.

◧◩◪
19. HarHar+0n[view] [source] [discussion] 2024-05-15 15:58:53
>>transc+05
The Anthropic folk were concerned enough that they left, and are indeed continuing to work on it [AI safety].

Now we have the co-leads of the superalignment/safety team leaving too.

Certainly not a good look for OpenAI.

There really doesn't seem to be much of a mission left at OpenAI - they have a CEO giving off used car salesman vibes, who recently mentioned considering allowing their AI to generate porn, and who is now releasing a flirty AI girlfriend as his gift to humanity.

replies(1): >>koe123+Pg2
◧◩◪◨
20. uLogMi+WA[view] [source] [discussion] 2024-05-15 17:00:21
>>binary+Ui
GPT does not need superalignment; the term refers to aligning artificial general intelligence and superintelligence.
◧◩◪◨
21. lowmag+XJ[view] [source] [discussion] 2024-05-15 17:43:29
>>mbgerr+Pa
oh no, it's just a real world reinforcement model
◧◩◪◨
22. fzzzy+NM[view] [source] [discussion] 2024-05-15 17:57:48
>>dontup+G7
I was able to download a copy of that before they took it down. Silly.
replies(1): >>dontup+GX2
◧◩◪◨
23. panark+s41[view] [source] [discussion] 2024-05-15 19:35:08
>>mbgerr+Pa
People experience existential terror from AI because it feels like massive, pervasive, implacable forces that we can't understand or control, with the potential to do great harm to our personal lives and to larger social and political systems, where we have zero power to stop it or avoid it or redirect it. Forces that benefit a few at the expense of the many.

What many of us are actually experiencing is existential terror about capitalism itself, but we don't have the conceptual framework or vocabulary to describe it that way.

It's a cognitive shortcut to look for a definable villain to blame for our fear, and historically that's taken the form of antisemitism, anti-migrant, anti-homeless, even ironically anti-communist, and we see similar corrupted forms of blame in antivax and anti-globalist conspiracy thinking, from both the left and the right.

While there are genuine x-risk hazards from AI, it seems like a lot of the current fear is really a corrupted and misplaced fear of having zero control over the foreboding and implacable forces of capitalism itself.

AI is hypercapitalism and that is terrifying.

replies(4): >>kridsd+5u1 >>raluse+eC1 >>hi-v-r+cT1 >>sudosy+xd2
◧◩◪◨
24. exolym+L91[view] [source] [discussion] 2024-05-15 20:03:20
>>mbgerr+Pa
Nick Land type beat
◧◩◪◨
25. sgregn+9i1[view] [source] [discussion] 2024-05-15 20:49:42
>>mbgerr+Pa
You mean freedom is an unaligned AI?
◧◩◪
26. throwa+rq1[view] [source] [discussion] 2024-05-15 21:37:53
>>dontup+57
How does capitalism work if there aren’t any workers to buy the products made by the capitalists? Not being argumentative here, I really want to know.
replies(5): >>Atotal+Nq1 >>lrvick+Iv1 >>Samoye+Tz1 >>imposs+tI1 >>austhr+t92
◧◩◪◨
27. Atotal+Nq1[view] [source] [discussion] 2024-05-15 21:40:39
>>throwa+rq1
Honestly don’t know if these kinds of people have thought that far ahead
◧◩◪◨⬒
28. kridsd+5u1[view] [source] [discussion] 2024-05-15 22:00:36
>>panark+s41
Ted Chiang on the Ezra Klein podcast said basically the same thing:

AI Doomerism is actually capitalist anxiety.

replies(1): >>Michae+4l2
◧◩◪◨
29. lrvick+Iv1[view] [source] [discussion] 2024-05-15 22:12:31
>>throwa+rq1
All that matters are quarterly profits.
◧◩◪◨
30. Samoye+Tz1[view] [source] [discussion] 2024-05-15 22:46:49
>>throwa+rq1
The machines can buy the products. We already have HFT, which obviously has little to do with actual products people are buying or selling. Just number go up/down.
replies(1): >>single+mE1
◧◩◪◨⬒
31. raluse+eC1[view] [source] [discussion] 2024-05-15 23:05:57
>>panark+s41
Different words with different meanings mean different things. A communist country could and would produce AI, and it would still be scary.
replies(2): >>yfw+kX1 >>yfw+yX1
◧◩◪◨⬒
32. single+mE1[view] [source] [discussion] 2024-05-15 23:28:27
>>Samoye+Tz1
If a machine buys a product from me and does not pay, whom should I sue?

Whoever that is, that's the person who actually made the purchase.

◧◩◪◨
33. imposs+tI1[view] [source] [discussion] 2024-05-16 00:07:24
>>throwa+rq1
The way it already works in any country where workers can't afford to buy the products today - so I imagine it would look like those countries that function most like the stereotypical African developing country.

So I imagine the result would be that industry devolves into the manufacturing of luxury products, in the style of the top-class products of ancient Rome.

◧◩◪◨
34. jerjer+XK1[view] [source] [discussion] 2024-05-16 00:30:36
>>dontup+G7
Sydney was their best "let's just release it without guardrails" bot.

Tay was trivially racist, but boy was Sydney a wacko.

◧◩◪◨⬒
35. hi-v-r+cT1[view] [source] [discussion] 2024-05-16 01:57:20
>>panark+s41
tl;dr: Fear of the unknown. The problem is that more and more people don't know anything about anything, and so are prone to rejecting and retaliating against what they don't understand, without making any effort to understand before forming an emotionally based opinion.
◧◩◪◨⬒⬓
36. yfw+kX1[view] [source] [discussion] 2024-05-16 02:51:49
>>raluse+eC1
That's because most communist countries are closer to authoritarian dictatorship than hippie commune.
◧◩◪◨⬒⬓
37. yfw+yX1[view] [source] [discussion] 2024-05-16 02:53:57
>>raluse+eC1
That's because most communist countries are closer to authoritarian dictatorship than Starfleet
◧◩◪◨
38. austhr+t92[view] [source] [discussion] 2024-05-16 05:33:59
>>throwa+rq1
Transfer payments, rent, and dividends would provide income. People would then use it to buy things just like they do now.
◧◩◪◨
39. sudosy+kd2[view] [source] [discussion] 2024-05-16 06:25:36
>>mbgerr+Pa
This is a pretty old idea, which dates back to the study of capitalism itself. Here are some articles on it: https://harvardichthus.org/2013/10/what-gods-we-worship-capi... and https://ianwrightsite.wordpress.com/2020/09/03/marx-on-capit...
◧◩◪◨⬒
40. sudosy+xd2[view] [source] [discussion] 2024-05-16 06:29:19
>>panark+s41
Well, we do have a conceptual framework and vocabulary for massive, pervasive and implacable forces beyond our understanding - it's the framework and vocabulary of religion and the occult. It has actually been used to describe capitalism essentially since capitalism itself, and it's been used explicitly as a framework to analyze it at least since Deleuze. Arguably, since Marx: as far as I'm aware, he was the first to personify capital as an actor in and of itself.
◧◩◪◨
41. koe123+Pg2[view] [source] [discussion] 2024-05-16 07:17:11
>>HarHar+0n
On the other hand, the Anthropic founders' reason for leaving also gave them an angle to start a new, successful company, now worth 9+ figures. Given that, I’m not sure I’ll take their concerns about the state of OpenAI at face value.
replies(1): >>HarHar+PN2
◧◩◪◨⬒⬓
42. Michae+4l2[view] [source] [discussion] 2024-05-16 08:18:46
>>kridsd+5u1
Probably not even that specific, more like an underlying fear that 8 billion people interacting in a complex system will forever be beyond the human capacity to grasp.

Which is likely true.

replies(1): >>cyanyd+gB4
◧◩◪◨⬒
43. HarHar+PN2[view] [source] [discussion] 2024-05-16 13:14:56
>>koe123+Pg2
I've watched all the Dario/Daniela interviews I can find, and I guess that's a fair way of putting it. It seems they genuinely felt (& were) constrained at OpenAI in being able to follow a safety-first agenda, and have articulated it as being "the SF way" to start a new company when you have a new idea (so maybe more cultural than looking for an angle), as well as being able to follow a dream of working together.

From what we've seen of OpenAI, it does seem that any dissenting opinions will be bulldozed by Altman's product and money-making focus. The cult-like "me too" tweets that huge swathes of the staff periodically send seem highly indicative of the culture.

Anthropic does seem genuine about their safety-first agenda and their strategy of leading by example, and they have been successful in getting others to follow their safe scaling principles, AI constitution (cf. OpenAI's new "Model Spec"), and open research into safety/interpretability. This emphasis on safe and steerable models seems to have given them an advantage in corporate adoption.

◧◩◪◨⬒
44. dontup+GX2[view] [source] [discussion] 2024-05-16 14:03:44
>>fzzzy+NM
Yeah, it was already mirrored pretty quickly. I expect enough people are now running cronjobs to archive whitelists of HF pages and auto-clone anything that gets pushed out; a sketch of what that might look like is below.
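
A minimal sketch of such a job, assuming a hypothetical repo whitelist and mirror path (snapshot_download is the real huggingface_hub call, and it skips files already in the local cache, so repeated runs only pull new uploads):

    # Hypothetical auto-mirroring job: poll a whitelist of Hugging Face
    # repos and snapshot anything new before it can be pulled again.
    import time
    from huggingface_hub import snapshot_download

    WATCHED_REPOS = ["some-org/some-model"]  # hypothetical whitelist

    def archive_once():
        for repo in WATCHED_REPOS:
            try:
                # Idempotent: already-cached files are skipped, so only
                # newly uploaded files are downloaded on each pass.
                path = snapshot_download(repo_id=repo, local_dir=f"mirror/{repo}")
                print(f"mirrored {repo} -> {path}")
            except Exception as exc:  # repo gone, gated, or network error
                print(f"skipped {repo}: {exc}")

    if __name__ == "__main__":
        while True:  # or invoke archive_once() from an hourly cron entry
            archive_once()
            time.sleep(3600)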
◧◩◪
45. cyanyd+FA4[view] [source] [discussion] 2024-05-17 00:04:50
>>dontup+57
Yes, this is definitely the signal that capitalism will determine the value of AI.

Same way Google search is now a steaming garbage pile.

◧◩◪◨⬒⬓⬔
46. cyanyd+gB4[view] [source] [discussion] 2024-05-17 00:09:45
>>Michae+4l2
So, this has happened multiple times. Its best-case example is eugenics, where "intellectuals" believed they could determine what the best traits are in a complex system and prune society to achieve some perfect outcome.

The problem, of course, is that the system is complex and filled with hidden variables, and humans will tend to focus entirely on the phenotypes which are easiest to observe.

These models will do the same human-biased selection and gravitate to a substantially vapid mean.
