
[return to "Claude is a space to think"]
1. Johnny+mE[view] [source] 2026-02-04 15:55:30
>>meetpa+(OP)
I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

It appears they trend in the right direction:

- Have not kissed the Ring.

- Oppose blocking AI regulation where others support blocking it (e.g. they do not support banning state AI laws [2]).

- Have committed to no ads.

- Willing to risk a Defense Department contract over objections to its use in lethal operations [1].

The things that are concerning:

- Palantir partnership (I'm unclear about what this actually is) [3]

- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])

It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every opportunity via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.

I'm curious, how do others here think about Anthropic?

[1] https://archive.is/Pm2QS

[2] https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...

[3] https://investors.palantir.com/news-details/2024/Anthropic-a...

[4] https://archive.is/4NGBE

2. Jayaku+ld1[view] [source] 2026-02-04 18:24:59
>>Johnny+mE
They are the most anti-open-weights AI company on the planet: they don't want to release open weights and don't want anyone else to do it either. They just hide behind a blanket of safety and alignment, claiming no models outside of theirs are safe, and they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; their policies change based on money and who runs them. Look at Google: their mantra once was "Don't be evil".

https://www.anthropic.com/news/anthropic-s-recommendations-o...

Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it is 100% written by AI, as its creator says, it never will be open-sourced. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.

3. Epitaq+ig1[view] [source] 2026-02-04 18:36:29
>>Jayaku+ld1
For the sake of seeing whether people like you understand the other side, can you try steelmanning the argument that open-weight AI can allow bad actors to cause a lot of harm?
4. 10xDev+lk1[view] [source] 2026-02-04 18:53:40
>>Epitaq+ig1
"please do all the work to argue my position so I don't have to".
5. Epitaq+Vm1[view] [source] 2026-02-04 19:04:09
>>10xDev+lk1
I wouldn't mind doing my best steelman of the open-source AI position if he responds (seriously, I'd try).

Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.

I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.

6. Jayaku+gz1[view] [source] 2026-02-04 20:06:51
>>Epitaq+Vm1
Since you asked for it, here is my steelman argument: everything can cause harm; it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source makes this super easy and cheap.

1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no controls, and they can be fine-tuned to cause damage, whereas closed models are harder to abuse because vendors can block it.

2. A less skilled person can exploit systems or create harmful code when they otherwise could not have.

3. Guards can be removed from an open model and it can be jailbroken, and this can no longer be observed (like an unknown zero-day attack) since it may be running privately.

4. Almost anything digital can be faked or manipulated from the original, or overwhelmed with false narratives so they rank better than the real thing in search.