zlacker

[return to "Inside The Chaos at OpenAI"]
1. simonw+o7[view] [source] 2023-11-20 03:19:42
>>maxuti+(OP)
This is really good journalism. There are a ton of interesting details in here that haven't been reported elsewhere, and it has all of the hallmarks of being well researched and sourced.

The first clue is this: "In conversations between The Atlantic and 10 current and former employees at OpenAI..."

When you're reporting something like this, especially when using anonymous sources (not anonymous to you, but sources that have good reasons not to want their names published), you can't just trust what someone tells you - they may have their own motives for presenting things in a certain way, or they may just be straight up lying.

So... you confirm what they are saying with other sources. That's why "10 current and former employees" is mentioned explicitly in the article.

Being published in the Atlantic helps too, because that's a publication with strong editorial integrity and a great track record.

◧◩
2. singul+ul[view] [source] 2023-11-20 05:28:07
>>simonw+o7
IDK it starts rather nonsensically:

    Sam Altman, the figurehead of the generative-AI revolution,
    —one must understand that OpenAI is not a technology company.

EDIT: despite the poor phrasing, I agree that the article as a whole is of high quality.

Yet, "zealous doomers": is that how people who are cautious about the potential power of AI are now being labeled?

◧◩◪
3. dkjaud+484[view] [source] 2023-11-21 00:55:09
>>singul+ul
"Zealous doomers" seems fair in the context of the vague and melodramatic claims they're pushing. They're describing the threat of something that doesn't exist and may never exist, yet they insist the threat is real and serious on that basis.
◧◩◪◨
4. crooke+Lc4[view] [source] 2023-11-21 01:26:18
>>dkjaud+484
Personally, I feel like the risks of future AI developments are real, but none of the stuff I've seen OpenAI do so far has made ChatGPT actually feel "safer" (in a sense of e.g., preventing unhealthy parasocial relationships with the system, actually being helpful when it comes to ethical conflicts, etc), just more stuck-up and excessively moralizing in a way that feels 100% tuned for bland corporate PR bot usage.