zlacker

[parent] [thread] 5 comments
1. singul+(OP)[view] [source] 2023-11-20 05:28:07
IDK, it starts rather nonsensically:

    Sam Altman, the figurehead of the generative-AI revolution,
    —one must understand that OpenAI is not a technology company.
EDIT: despite the poor phrasing, I agree that the article as a whole is of high quality

Yet: "zealous doomers" is that how people cautious of the potential power of AI are now being labeled?

replies(3): >>simonw+w >>dkjaud+AM3 >>JohnFe+ey7
2. simonw+w[view] [source] 2023-11-20 05:30:22
>>singul+(OP)
That makes a lot more sense in context:

> To truly understand the events of the past 48 hours—the shocking, sudden ousting of OpenAI’s CEO, Sam Altman, arguably the figurehead of the generative-AI revolution, followed by reports that the company is now in talks to bring him back—one must understand that OpenAI is not a technology company. At least, not like other epochal companies of the internet age, such as Meta, Google, and Microsoft.

The key piece here is "At least, not like other epochal companies of the internet age, such as Meta, Google, and Microsoft." - the article then gets into the weird structure of OpenAI as a non-profit, which is indeed crucial to understanding what has happened over the weekend.

This is good writing. The claim that "OpenAI is not a technology company" in the opening paragraph of the story instantly grabs your attention and makes you ask why they would say that... a question which they then answer in the next few sentences.

3. dkjaud+AM3[view] [source] 2023-11-21 00:55:09
>>singul+(OP)
"Zealous doomers" seems fair in the context of the vague and melodramatic claims they're pushing. But it makes sense because they're describing the threat of something that doesn't exist and may never exist. What is bad is that they are trying to claim that the threat is real and serious on that basis.
replies(2): >>crooke+hR3 >>Terrif+C04
4. crooke+hR3[view] [source] [discussion] 2023-11-21 01:26:18
>>dkjaud+AM3
Personally, I feel like the risks of future AI developments are real, but none of the stuff I've seen OpenAI do so far has made ChatGPT actually feel "safer" (in the sense of, e.g., preventing unhealthy parasocial relationships with the system, or actually being helpful when it comes to ethical conflicts), just more stuck-up and excessively moralizing in a way that feels 100% tuned for bland corporate PR bot usage.
5. Terrif+C04[view] [source] [discussion] 2023-11-21 02:21:54
>>dkjaud+AM3
> that doesn't exist and may never exist

It doesn’t exist until suddenly it does. I think there are a lot of potential issues we really should be preparing for / trying to solve.

For example, what to do about unemployment. We can't wait until massive numbers of people start losing their jobs before we start working on what to do.

I’m not for slowing down AI research, but I do think we need to restrict or slow the deployment of AI if the effects on society are problematic.

6. JohnFe+ey7[view] [source] 2023-11-21 22:58:50
>>singul+(OP)
> "zealous doomers" is that how people cautious of the potential power of AI are now being labeled?

I think that "zealous doomers" refers to people who are afraid that this technology may result in some sort of Skynet situation, not those who are nervous about more realistic risks.
