zlacker

[parent] [thread] 10 comments
1. simonw+(OP)[view] [source] 2023-11-20 03:19:42
This is really good journalism. There are a ton of interesting details in here that haven't been reported elsewhere, and it has all of the hallmarks of being well researched and sourced.

The first clue is this: "In conversations between The Atlantic and 10 current and former employees at OpenAI..."

When you're reporting something like this, especially when using anonymous sources (not anonymous to you, but sources that have good reasons not to want their names published), you can't just trust what someone tells you - they may have their own motives for presenting things in a certain way, or they may just be straight up lying.

So... you confirm what they are saying with other sources. That's why "10 current and former employees" is mentioned explicitly in the article.

Being published in the Atlantic helps too, because that's a publication with strong editorial integrity and a great track record.

replies(5): >>tkgall+D6 >>singul+6e >>fortra+SJ1 >>chubot+H74 >>skybri+ih4
2. tkgall+D6[view] [source] 2023-11-20 04:30:39
>>simonw+(OP)
> This is really good journalism.

That was exactly my reaction. I’ve been following the news and rumors and speculation closely since Altman’s firing, and this is by far the most substantive account I have read. Kudos to the authors and to The Atlantic for getting it out so quickly.

3. singul+6e[view] [source] 2023-11-20 05:28:07
>>simonw+(OP)
IDK it starts rather nonsensically:

    Sam Altman, the figurehead of the generative-AI revolution,
    —one must understand that OpenAI is not a technology company.
EDIT: despite the poor phrasing, I agree that the article as a whole is of high quality

Yet: "zealous doomers" is that how people cautious of the potential power of AI are now being labeled?

replies(3): >>simonw+Ce >>dkjaud+G04 >>JohnFe+kM7
4. simonw+Ce[view] [source] [discussion] 2023-11-20 05:30:22
>>singul+6e
That makes a lot more sense in context:

> To truly understand the events of the past 48 hours—the shocking, sudden ousting of OpenAI’s CEO, Sam Altman, arguably the figurehead of the generative-AI revolution, followed by reports that the company is now in talks to bring him back—one must understand that OpenAI is not a technology company. At least, not like other epochal companies of the internet age, such as Meta, Google, and Microsoft.

The key piece here is "At least, not like other epochal companies of the internet age, such as Meta, Google, and Microsoft." - which then gets into the weird structure of OpenAI as a non-profit, which is indeed crucial to understanding what has happened over the weekend.

This is good writing. The claim that "OpenAI is not a technology company" in the opening paragraph of the story instantly grabs your attention and makes you ask why they would say that... a question which they then answer in the next few sentences.

5. fortra+SJ1[view] [source] 2023-11-20 14:18:26
>>simonw+(OP)
And this is why I pay for journalism, including The Atlantic. Something people on Hacker News hate to do...
6. dkjaud+G04[view] [source] [discussion] 2023-11-21 00:55:09
>>singul+6e
"Zealous doomers" seems fair in the context of the vague and melodramatic claims they're pushing. The label fits because they're describing the threat of something that doesn't exist and may never exist. What's bad is that, on that basis, they claim the threat is real and serious.
replies(2): >>crooke+n54 >>Terrif+Ie4
7. crooke+n54[view] [source] [discussion] 2023-11-21 01:26:18
>>dkjaud+G04
Personally, I feel like the risks of future AI developments are real, but none of the stuff I've seen OpenAI do so far has made ChatGPT actually feel "safer" (in a sense of e.g., preventing unhealthy parasocial relationships with the system, actually being helpful when it comes to ethical conflicts, etc), just more stuck-up and excessively moralizing in a way that feels 100% tuned for bland corporate PR bot usage.
8. chubot+H74[view] [source] 2023-11-21 01:42:07
>>simonw+(OP)
This is also a good 2020 article on OpenAI, by the same author Karen Hao:

The messy, secretive reality behind OpenAI’s bid to save the world

https://www.technologyreview.com/2020/02/17/844721/ai-openai...

The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.

Only 4 comments at the time: >>22351341

More comments on Reddit: https://old.reddit.com/r/MachineLearning/comments/f5immz/d_t...

9. Terrif+Ie4[view] [source] [discussion] 2023-11-21 02:21:54
>>dkjaud+G04
> that doesn't exist and may never exist

It doesn’t exist until suddenly it does. I think there are a lot of potential issues we really should be preparing for / trying to solve.

For example, what to do about unemployment. We can’t wait until massive numbers of people start losing their jobs before we start working on what to do.

I’m not for slowing down AI research but I do think we need to restrict or slow the deployment of AI if the effects on society are problematic.

10. skybri+ih4[view] [source] 2023-11-21 02:41:12
>>simonw+(OP)
I think it's because one of the authors is writing a book about OpenAI. They were interviewing people before and already had a lot of contacts and context.
11. JohnFe+kM7[view] [source] [discussion] 2023-11-21 22:58:50
>>singul+6e
> "zealous doomers" is that how people cautious of the potential power of AI are now being labeled?

I think that "zealous doomers" refers to people who are afraid that this technology may result in some sort of Skynet situation, not those who are nervous about more realistic risks.

[go to top]