zlacker

[return to "Claude is a space to think"]
1. Johnny+mE[view] [source] 2026-02-04 15:55:30
>>meetpa+(OP)
I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

It appears they trend in the right direction:

- Have not kissed the Ring.

- Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).

- Committed to no ads.

- Willing to risk a defense department contract over objections to use in lethal operations [1].

The things that are concerning:

- Palantir partnership (I'm unclear about what this actually is) [3]

- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])

It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.

I'm curious, how do others here think about Anthropic?

[1]https://archive.is/Pm2QS

[2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...

[3]https://investors.palantir.com/news-details/2024/Anthropic-a...

[4]https://archive.is/4NGBE

2. Jayaku+ld1[view] [source] 2026-02-04 18:24:59
>>Johnny+mE
They are the most anti-open-weights AI company on the planet: they don't want to release open weights, and they don't want anyone else to either. They just hide behind a safety-and-alignment blanket, saying no models are safe outside of theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; the policies change based on money and who runs them - look at Google, whose mantra was once "Don't be evil."

https://www.anthropic.com/news/anthropic-s-recommendations-o...

Also, Codex CLI and Gemini CLI are open source; Claude Code never will be - it's their moat, and even though it's 100% written by AI (as its creator says), it never will be open. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.

3. Epitaq+ig1[view] [source] 2026-02-04 18:36:29
>>Jayaku+ld1
Just so I can see whether people like you understand the other side: can you try steelmanning the argument that open-weight AI can allow bad actors to cause a lot of harm?
4. thenew+xn1[view] [source] 2026-02-04 19:06:51
>>Epitaq+ig1
I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:

LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost, and increasing the ease, of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required large troll farms of real people, semi-specialized skills like Photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do, that would magnify the potential damage tenfold.

Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.

This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
