zlacker

[parent] [thread] 32 comments
1. pfist+(OP)[view] [source] 2024-05-15 14:28:10
I have noticed, and I am concerned that they were the leaders of the Superalignment team.
replies(4): >>zer00e+n1 >>transc+R1 >>treme+I2 >>dontup+W3
2. zer00e+n1[view] [source] 2024-05-15 14:35:04
>>pfist+(OP)
Sam Altman superaligned them right out the door...
3. transc+R1[view] [source] 2024-05-15 14:37:55
>>pfist+(OP)
On the other hand, they clearly weren't concerned enough about the issue to continue working on it.
replies(4): >>dontup+b4 >>llamai+L5 >>scarmi+M5 >>HarHar+Rj
4. treme+I2[view] [source] 2024-05-15 14:41:22
>>pfist+(OP)
Reads like the beginnings of a good dystopian movie script
5. dontup+W3[view] [source] 2024-05-15 14:47:35
>>pfist+(OP)
Turns out we already have alignment, it's called capitalism.
replies(3): >>mbgerr+G7 >>throwa+in1 >>cyanyd+wx4
◧◩
6. dontup+b4[view] [source] [discussion] 2024-05-15 14:48:36
>>transc+R1
One could argue that at this point openai is being Extended and Embraced by Microsoft and is unlikely to have much autonomy or groundbreaking impact one way or another.
◧◩
7. llamai+L5[view] [source] [discussion] 2024-05-15 14:55:24
>>transc+R1
Ah yes, a scientist refusing to work on the hydrogen bomb couldn't have been all that concerned about it.
◧◩
8. scarmi+M5[view] [source] [discussion] 2024-05-15 14:55:26
>>transc+R1
If your ostensible purpose is being sidelined by decision makers, trying to fight back is often a good option, but sometimes you fail. Admitting failure and focusing on other approaches is the right choice at that point.
◧◩
9. mbgerr+G7[view] [source] [discussion] 2024-05-15 15:05:05
>>dontup+W3
This is true and we do not talk about it enough. Moreover, Capitalism is itself an unaligned AI, and understanding it through that lens clarifies a great deal.
replies(5): >>lowmag+OG >>panark+j11 >>exolym+C61 >>sgregn+0f1 >>sudosy+ba2
◧◩
10. HarHar+Rj[view] [source] [discussion] 2024-05-15 15:58:53
>>transc+R1
The Anthropic folk were concerned enough that they left, and are indeed continuing to work on it [AI safety].

Now we have the co-leads of the super-alignment/safety team leaving too.

Certainly not a good look for OpenAI.

There really doesn't seem to be much of a mission left at OpenAI: they have a CEO giving off used-car-salesman vibes, who recently mentioned considering allowing their AI to generate porn, and is now releasing a flirty AI girlfriend as his gift to humanity.

replies(1): >>koe123+Gd2
◧◩◪
11. lowmag+OG[view] [source] [discussion] 2024-05-15 17:43:29
>>mbgerr+G7
oh no, it's just a real world reinforcement model
◧◩◪
12. panark+j11[view] [source] [discussion] 2024-05-15 19:35:08
>>mbgerr+G7
People experience existential terror from AI because it feels like massive, pervasive, implacable forces that we can't understand or control, with the potential to do great harm to our personal lives and to larger social and political systems, where we have zero power to stop it or avoid it or redirect it. Forces that benefit a few at the expense of the many.

What many of us are actually experiencing is existential terror about capitalism itself, but we don't have the conceptual framework or vocabulary to describe it that way.

It's a cognitive shortcut to look for a definable villain to blame for our fear, and historically that's taken the form of antisemitism, anti-migrant, anti-homeless, even ironically anti-communist, and we see similar corrupted forms of blame in antivax and anti-globalist conspiracy thinking, from both the left and the right.

While there are genuine x-risk hazards from AI, it seems like a lot of the current fear is really a corrupted and misplaced fear of having zero control over the foreboding and implacable forces of capitalism itself.

AI is hypercapitalism and that is terrifying.

replies(4): >>kridsd+Wq1 >>raluse+5z1 >>hi-v-r+3Q1 >>sudosy+oa2
◧◩◪
13. exolym+C61[view] [source] [discussion] 2024-05-15 20:03:20
>>mbgerr+G7
Nick Land type beat
◧◩◪
14. sgregn+0f1[view] [source] [discussion] 2024-05-15 20:49:42
>>mbgerr+G7
You mean freedom is an unaligned AI?
◧◩
15. throwa+in1[view] [source] [discussion] 2024-05-15 21:37:53
>>dontup+W3
How does capitalism work if there aren’t any workers to buy the products made by the capitalists? Not being argumentative here, I really want to know.
replies(5): >>Atotal+En1 >>lrvick+zs1 >>Samoye+Kw1 >>imposs+kF1 >>austhr+k62
◧◩◪
16. Atotal+En1[view] [source] [discussion] 2024-05-15 21:40:39
>>throwa+in1
Honestly don’t know if these kinds of people have thought that far ahead
◧◩◪◨
17. kridsd+Wq1[view] [source] [discussion] 2024-05-15 22:00:36
>>panark+j11
Ted Chiang on the Ezra Klein podcast said basically the same thing:

AI Doomerism is actually capitalist anxiety.

replies(1): >>Michae+Vh2
◧◩◪
18. lrvick+zs1[view] [source] [discussion] 2024-05-15 22:12:31
>>throwa+in1
All that matters are quarterly profits.
◧◩◪
19. Samoye+Kw1[view] [source] [discussion] 2024-05-15 22:46:49
>>throwa+in1
The machines can buy the products. We already have HFT, which obviously has little to do with actual products people are buying or selling. Just number go up/down.
replies(1): >>single+dB1
◧◩◪◨
20. raluse+5z1[view] [source] [discussion] 2024-05-15 23:05:57
>>panark+j11
Different words with different meanings mean different things. A communist country could and would produce AI, and it would still be scary.
replies(2): >>yfw+bU1 >>yfw+pU1
◧◩◪◨
21. single+dB1[view] [source] [discussion] 2024-05-15 23:28:27
>>Samoye+Kw1
If a machine buys a product from me and does not pay, whom should I sue?

Whoever that is, that is the person who actually made the purchase.

◧◩◪
22. imposs+kF1[view] [source] [discussion] 2024-05-16 00:07:24
>>throwa+in1
The same way it works in any country where workers can't afford to buy the products today, so I imagine much like those countries that function most like the stereotypical African developing country.

So I imagine the result would be that industry devolves into the manufacture of luxury products, in the style of the top-class goods of ancient Rome.

◧◩◪◨
23. hi-v-r+3Q1[view] [source] [discussion] 2024-05-16 01:57:20
>>panark+j11
tl;dr: Fear of the unknown. The problem is more and more people don't know anything about anything, and so are prone to rejecting and retaliating against what they don't understand, while not making any effort to understand before forming an emotionally-based opinion.
◧◩◪◨⬒
24. yfw+bU1[view] [source] [discussion] 2024-05-16 02:51:49
>>raluse+5z1
That's because most communist countries are closer to authoritarian dictatorship than hippie commune.
◧◩◪◨⬒
25. yfw+pU1[view] [source] [discussion] 2024-05-16 02:53:57
>>raluse+5z1
That's because most communist countries are closer to authoritarian dictatorship than Starfleet
◧◩◪
26. austhr+k62[view] [source] [discussion] 2024-05-16 05:33:59
>>throwa+in1
Transfer payments, rent, and dividends would provide income. People would then use it to buy things just like they do now.
◧◩◪
27. sudosy+ba2[view] [source] [discussion] 2024-05-16 06:25:36
>>mbgerr+G7
This is a pretty old idea, which dates back to the study of capitalism itself. Here are some articles on it: https://harvardichthus.org/2013/10/what-gods-we-worship-capi... and https://ianwrightsite.wordpress.com/2020/09/03/marx-on-capit...
◧◩◪◨
28. sudosy+oa2[view] [source] [discussion] 2024-05-16 06:29:19
>>panark+j11
Well, we do have a conceptual framework and vocabulary for massive, pervasive and implacable forces beyond our understanding - it's the framework and vocabulary of religion and the occult. It has been used to describe capitalism essentially since capitalism itself, and it's been used explicitly as a framework to analyze it at least since Deleuze, and arguably since Marx: as far as I'm aware, he was the first to personify capital as an actor in and of itself.
◧◩◪
29. koe123+Gd2[view] [source] [discussion] 2024-05-16 07:17:11
>>HarHar+Rj
On the other hand, the Anthropic founders' reason for leaving also gave them an angle to start a successful new company, now worth 9+ figures. Given that, I'm not sure I'll take their concerns about the state of OpenAI at face value.
replies(1): >>HarHar+GK2
◧◩◪◨⬒
30. Michae+Vh2[view] [source] [discussion] 2024-05-16 08:18:46
>>kridsd+Wq1
Probably not even that specific, more like an underlying fear that 8 billion people interacting in a complex system will forever be beyond the human capacity to grasp.

Which is likely true.

replies(1): >>cyanyd+7y4
◧◩◪◨
31. HarHar+GK2[view] [source] [discussion] 2024-05-16 13:14:56
>>koe123+Gd2
I've watched all the Dario/Daniela interviews I can find, and I guess that's a fair way of putting it. It seems they genuinely felt (& were) constrained at OpenAI in being able to follow a safety-first agenda, and have articulated it as being "the SF way" to start a new company when you have a new idea (so maybe more cultural than looking for an angle), as well as being able to follow a dream of working together.

From what we've seen of OpenAI, it does seem that any dissenting opinions will be bulldozed by Altman's product and money-making focus. The cult-like "me too" tweets that huge swathes of the staff periodically send seem highly indicative of the culture.

Anthropic does seem genuine about their safety-first agenda and strategy of leading by example, and they have been successful in getting others to follow their safe scaling principles, AI constitution (cf. OpenAI's new "Model Spec"), and open research into safety/interpretability. This emphasis on safe and steerable models seems to have given them an advantage in corporate adoption.

◧◩
32. cyanyd+wx4[view] [source] [discussion] 2024-05-17 00:04:50
>>dontup+W3
Yes, this is definitely the signal that capitalism will determine the value of AI.

Same way Google search is now a steaming garbage pile.

◧◩◪◨⬒⬓
33. cyanyd+7y4[view] [source] [discussion] 2024-05-17 00:09:45
>>Michae+Vh2
So, this has happened multiple times. Its best-case example is eugenics, where "intellectuals" believe they can determine what the best traits are in a complex system and prune society to achieve some perfect outcome.

The problem, of course, is that the system is complex and filled with hidden variables, and humans will tend to focus entirely on the phenotypes which are easiest to observe.

These models will do the same human-biased selection and gravitate to a substantially vapid mean.
