zlacker

[parent] [thread] 61 comments
1. Johnny+(OP)[view] [source] 2026-02-04 15:55:30
I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

It appears they trend in the right direction:

- Have not kissed the Ring.

- Oppose blocking AI regulation that others support (e.g., they do not support banning state AI laws [2]).

- Have committed to no ads.

- Willing to risk a defense department contract over objections to use in lethal operations [1].

The things that are concerning:

- Palantir partnership (I'm unclear about what this actually is) [3]

- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])

It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.

I'm curious, how do others here think about Anthropic?

[1]https://archive.is/Pm2QS

[2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...

[3]https://investors.palantir.com/news-details/2024/Anthropic-a...

[4]https://archive.is/4NGBE

replies(16): >>skybri+93 >>marxis+fd >>mrdepe+7p >>cedws+ip >>adrian+Kp >>insane+rx >>Jayaku+Zy >>threet+yC >>drawfl+UC >>throwa+5F >>Zambyt+YJ >>fallou+8T >>rainco+By1 >>aglusz+Ty1 >>nilkn+sB1 >>b3ing+m42
2. skybri+93[view] [source] 2026-02-04 16:10:10
>>Johnny+(OP)
When powerful people, companies, and other organizations like governments do a whole lot of very good and very bad things, figuring out whether this rounds to “more good than bad” or “more bad than good” is kind of a fraught question. I think Anthropic is still in the “more good than bad” range, but it doesn’t make sense to think about it along the lines of heroes versus villains. They’ve done things that I put in the “seems bad” column, and will likely do more. And more good things, too.

They’re moving towards becoming load-bearing infrastructure, and once that happens, answering specific questions about what you should do about it becomes rather situational.

3. marxis+fd[view] [source] 2026-02-04 16:52:09
>>Johnny+(OP)
I think I’m not allowed to say what I think should happen to anyone who works with Palantir.
replies(1): >>fragme+jD
4. mrdepe+7p[view] [source] 2026-02-04 17:45:28
>>Johnny+(OP)
Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.

Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.

replies(4): >>librar+OJ >>yoyohe+9M >>Johnny+sN >>qudat+nD1
5. cedws+ip[view] [source] 2026-02-04 17:46:14
>>Johnny+(OP)
Their move of disallowing alternative clients to use a Claude Code subscription pissed me off immensely. I triggered a discussion about it yesterday[0]. It’s the opposite of the openness that led software to where it is today. I’m usually not so bothered about such things, but this is existential for us engineers. We need to scrutinise this behaviour from AI companies extra hard or we’re going to experience unprecedented enshittification. Imagine a world where you’ve lost your software freedoms and have no ability to fight back because Anthropic’s customers are pumping out 20x as many features as you.

[0]: >>46873708

replies(1): >>2001zh+k71
6. adrian+Kp[view] [source] 2026-02-04 17:48:04
>>Johnny+(OP)
> I'm curious, how do others here think about Anthropic?

I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.

7. insane+rx[view] [source] 2026-02-04 18:18:08
>>Johnny+(OP)
I don’t know about “good guys”, but since they are highly focused on coding rather than a general-purpose chatbot (hard to overcome ChatGPT mindshare there), they have a customer base that is more willing to pay for usage, and therefore they are less likely to need to add an ad revenue stream. So yes, so far I would say they are on stronger ground than the others.
8. Jayaku+Zy[view] [source] 2026-02-04 18:24:59
>>Johnny+(OP)
They are the most anti-open-weights AI company on the planet: they don't want to release weights and don't want anyone else to either. They hide behind a blanket of safety and alignment, saying no models are safe except theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; the policies change based on money and who runs them. Look at Google: their mantra was once "Don't be evil."

https://www.anthropic.com/news/anthropic-s-recommendations-o...

Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it's 100% written by AI, as its creator says, it never will be open. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.

replies(2): >>Epitaq+WB >>skerit+3a1
9. Epitaq+WB[view] [source] [discussion] 2026-02-04 18:36:29
>>Jayaku+Zy
Just to see whether people like you understand the other side: can you try steelmanning the argument that open-weight AI can allow bad actors to cause a lot of harm?
replies(3): >>10xDev+ZF >>thenew+bJ >>wavemo+gq1
10. threet+yC[view] [source] 2026-02-04 18:38:27
>>Johnny+(OP)
Given that LLMs essentially built their business model on stolen public (and not-so-public!) works, the ideal end state is that they all die in favor of something we can run locally.
replies(1): >>mirekr+0L
11. drawfl+UC[view] [source] 2026-02-04 18:39:46
>>Johnny+(OP)
They work with the US military.
replies(1): >>mhb+bF
12. fragme+jD[view] [source] [discussion] 2026-02-04 18:41:30
>>marxis+fd
Maybe you could use an LLM to clean up what you want to say
13. throwa+5F[view] [source] 2026-02-04 18:49:22
>>Johnny+(OP)
I am on the opposite side of what you are thinking.

- Blocking access to others (cursor, openai, opencode)

- Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs

- Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.

At this scale, I don't think there are good companies. My hope is in open models, and the only labs doing good on that front are the Chinese labs.

replies(4): >>derac+JI >>esbran+RJ >>mym199+MK >>signat+Yz1
14. mhb+bF[view] [source] [discussion] 2026-02-04 18:49:32
>>drawfl+UC
Defending the US. So?
replies(3): >>cess11+nI >>drawfl+ZS >>spacec+F81
15. 10xDev+ZF[view] [source] [discussion] 2026-02-04 18:53:40
>>Epitaq+WB
"please do all the work to argue my position so I don't have to".
replies(1): >>Epitaq+zI
16. cess11+nI[view] [source] [discussion] 2026-02-04 19:03:14
>>mhb+bF
That's pretty bad.
replies(1): >>mhb+eK
17. Epitaq+zI[view] [source] [discussion] 2026-02-04 19:04:09
>>10xDev+ZF
I wouldn't mind doing my best steelman of open-source AI if he responds (seriously, I'd try).

Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.

I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.

replies(1): >>Jayaku+UU
18. derac+JI[view] [source] [discussion] 2026-02-04 19:05:02
>>throwa+5F
I agree, they seem to be following the Apple playbook. Make a closed off platform and present yourself as morally superior.
19. thenew+bJ[view] [source] [discussion] 2026-02-04 19:06:51
>>Epitaq+WB
I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:

LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like Photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do, that would magnify the potential damage tenfold.

Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.

This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".

20. librar+OJ[view] [source] [discussion] 2026-02-04 19:09:21
>>mrdepe+7p
I mean, yes and. Companies may do things for broadly marketing reasons, but that can have positive consequences for users and companies can make committed decisions that don't just optimize for short term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.

And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and may not be able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course, as you say, even he may be pushed to take money from unsavory sources.

Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.

And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.

replies(1): >>bigyab+D51
21. esbran+RJ[view] [source] [discussion] 2026-02-04 19:09:33
>>throwa+5F
> Blocking access

> Asking to regulate hardware chips more

> partnerships with [the military-industrial complex]

> only labs doing good in that front are Chinese labs

That last one is a doozy.

22. Zambyt+YJ[view] [source] 2026-02-04 19:10:40
>>Johnny+(OP)
They are the only AI company more closed than OpenAI, which is quite a feat. Any "commitment" they make should only be interpreted as marketing until they rectify this. The only "good guys" in AI are the ones developing inference engines that let you run models on your own hardware. Any individual model has some problems, but making models fungible and fully under the user's control (access to weights) turns them into a possible positive force for the user.
23. mhb+eK[view] [source] [discussion] 2026-02-04 19:11:45
>>cess11+nI
Sweden too. So there's that.
24. mym199+MK[view] [source] [discussion] 2026-02-04 19:14:34
>>throwa+5F
The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, in a time when low morality is rewarded. Competing in a rigged market while trying to be 100% morally and ethically right ends up being not competing at all. So companies have to pick and choose the hills they fight on. And if you look at how people are voting with their dollars by paying for these tools, being a "good" company doesn't seem to factor much into it in aggregate.
replies(1): >>throwa+nN
25. mirekr+0L[view] [source] [discussion] 2026-02-04 19:15:27
>>threet+yC
Anthropic settled with the authors of stolen works for $1.5B; that case is closed, isn't it?
replies(1): >>riku_i+2u1
26. yoyohe+9M[view] [source] [discussion] 2026-02-04 19:21:49
>>mrdepe+7p
At the end of the day, the choice of companies we interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good', as opposed to a company that is actively just plain evil and OK with it.

That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.

replies(1): >>astran+sU1
27. throwa+nN[view] [source] [discussion] 2026-02-04 19:28:28
>>mym199+MK
Exactly. You can't compete morally when cheating, doing illegal things, and supporting bad guys are the norm. Hence, I hope open models will win in the long term.

Similar to Oracle vs Postgres, or some obscure closed-source caching product vs Redis. One day I hope we will have very good SOTA open models that closed models compete to catch up with (not saying Oracle is playing catch-up with Pg).

28. Johnny+sN[view] [source] [discussion] 2026-02-04 19:28:51
>>mrdepe+7p
How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.
replies(6): >>advise+wR >>mrdepe+yW >>harith+h01 >>Comput+X41 >>bigyab+R51 >>aglusz+qz1
29. advise+wR[view] [source] [discussion] 2026-02-04 19:50:20
>>Johnny+sN
Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes "when are the humans free to implement their values in their work, and when aren't they?" You need to inspect ownership structure, size, corporate charter, and so on, and realize that it varies with time and situation.

Anthropic being a PBC probably helps.

replies(1): >>hungry+A91
30. drawfl+ZS[view] [source] [discussion] 2026-02-04 19:58:12
>>mhb+bF
What year do you think it is? The US is actively aggressive in multiple areas of the world. As a non-US citizen, I don't think helping that effort at the expense of the rest of the world is good.
replies(2): >>mhb+Hk1 >>riku_i+vu1
31. fallou+8T[view] [source] 2026-02-04 19:58:42
>>Johnny+(OP)
>I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO continuously threatens all white-collar workers, and they have engineers playing the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.

32. Jayaku+UU[view] [source] [discussion] 2026-02-04 20:06:51
>>Epitaq+zI
Since you asked for it, here is my steelman argument: everything can cause harm; it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source makes all of this super easy and cheap.

1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control and can be fine-tuned to cause damage, whereas with closed source this is hard, as vendors might block it.

2. A less skilled person can exploit or create harmful code who otherwise could not have.

3. Guards can be removed from an open model and it can be jailbroken, which can no longer be observed (like an unknown zero-day attack) since it may be running privately.

4. Almost anything digital can be faked or manipulated from the original, or the original can be overwhelmed with false narratives that rank better than the real thing in search.
33. mrdepe+yW[view] [source] [discussion] 2026-02-04 20:15:51
>>Johnny+sN
I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it right when OpenAI said they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.

When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.

Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on whether they did it on purpose or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?

replies(2): >>thinkl+rl1 >>kviran+fu1
34. harith+h01[view] [source] [discussion] 2026-02-04 20:31:03
>>Johnny+sN
I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values. Only shareholder wealth maximisation goals.
replies(1): >>astran+gU1
35. Comput+X41[view] [source] [discussion] 2026-02-04 20:48:37
>>Johnny+sN
People have values; corporations do not.
36. bigyab+D51[view] [source] [discussion] 2026-02-04 20:52:07
>>librar+OJ
> and even get into conflicts with governments over the issue.

To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.

[0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

37. bigyab+R51[view] [source] [discussion] 2026-02-04 20:52:55
>>Johnny+sN
No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the Darth Vader of some amorphous emergent technology. Both are very profitable optics to their respective audiences.

If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*

replies(1): >>astran+6U1
38. 2001zh+k71[view] [source] [discussion] 2026-02-04 20:59:22
>>cedws+ip
Anthropic's move of disallowing opencode is quite offputting to me because there really isn't a way to interpret it as anything other than a walled-garden move that abuses their market position to deliberately lock in users.

Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage-pattern "abuses" that you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore restricting Opencode wouldn't really save Anthropic money, as it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to be purely one to restrict subscribers from using competing tools and enforce a vertically-integrated ecosystem.

In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.

In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in and then enshittify the product later, they are certainly acting like it.

The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.

I don't know whether Anthropic knows that they are pissing off their most loyal fanbase of conscientious consumers a lot with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.

39. spacec+F81[view] [source] [discussion] 2026-02-04 21:06:23
>>mhb+bF
The US military is famous for purely acting in self defence...
40. hungry+A91[view] [source] [discussion] 2026-02-04 21:10:41
>>advise+wR
>Companies, not being sentient, don't have values; only their leaders/employees do

Isn't that a distinction without a difference? Every real world company has employees, and those people do have values (well, except the psychopaths).

replies(1): >>Lanzaa+8D1
41. skerit+3a1[view] [source] [discussion] 2026-02-04 21:12:43
>>Jayaku+Zy
They don't even want people using OpenCode with their Max subscriptions (which OpenAI does allow, kind of)
42. mhb+Hk1[view] [source] [discussion] 2026-02-04 22:06:55
>>drawfl+ZS
Two things can be true. The US pays for most of the defense of NATO.
43. thinkl+rl1[view] [source] [discussion] 2026-02-04 22:10:26
>>mrdepe+yW
Ideally, ethical buyers would cause the market to line up behind ethical products. For that to be possible, we have to have choices available to us. Seems to me Anthropic is making such a choice available to see if buyers will line up behind it.
replies(1): >>fogzen+CK1
44. wavemo+gq1[view] [source] [discussion] 2026-02-04 22:34:34
>>Epitaq+WB
The steelman argument is that super-intelligent AGI could allow any random person to build destructive technology, so companies on the path toward creating that ought to be very careful about alignment, safety and, indeed, access to weights.

The obvious assumed premise of this argument is that Anthropic are actually on the path toward creating super-intelligent AGI. Many people, including myself, are skeptical of this. (In fact I would go farther - in my opinion, cosplaying as though their AI is so intelligent that it's dangerous has become a marketing campaign for Anthropic, and their rhetoric around this topic should usually be taken with a grain of salt.)

45. riku_i+2u1[view] [source] [discussion] 2026-02-04 22:57:09
>>mirekr+0L
It's not approved yet, I think.
46. kviran+fu1[view] [source] [discussion] 2026-02-04 22:58:35
>>mrdepe+yW
Wow. Well said.
47. riku_i+vu1[view] [source] [discussion] 2026-02-04 22:59:39
>>drawfl+ZS
Sure, just as other powers have been actively aggressive for the last N thousand years. That's how humans operate; those who don't, go extinct.
48. rainco+By1[view] [source] 2026-02-04 23:22:56
>>Johnny+(OP)
Google was the 'good guy.' Until it wasn't.

Hell, OpenAI was the good guy.

replies(1): >>Jumpin+VM1
49. aglusz+Ty1[view] [source] 2026-02-04 23:24:38
>>Johnny+(OP)
In Poland, before the last presidential election, a member of one candidate’s campaign team had a moment of accidental honesty. Asked whether his candidate would pledge not to raise taxes after winning, he replied: “Well, what’s the harm in promising?”
50. aglusz+qz1[view] [source] [discussion] 2026-02-04 23:27:25
>>Johnny+sN
> How do you parse the difference between marketing and having values?

You don't. Companies want people to think they have values. But companies are not people. Companies exist to earn money.

> That hasn't happened with Anthropic for me.

Yet.

51. signat+Yz1[view] [source] [discussion] 2026-02-04 23:31:27
>>throwa+5F
No good companies for you, yet you bet on Chinese labs! Even if you have no moral problem at all with China's authoritarianism, Chinese companies are as morally trustworthy as American ones. That much is clear.

As it's often said: there is no such thing as a free product; you are the product. AI training is expensive even for Chinese companies.

replies(1): >>nemoma+hA1
52. nemoma+hA1[view] [source] [discussion] 2026-02-04 23:34:03
>>signat+Yz1
I expect that, to some degree, the Chinese models don't need immediate profits, because having them as a show of capability for the state is already a goal met. They're probably getting at least some support from the state.
53. nilkn+sB1[view] [source] 2026-02-04 23:40:44
>>Johnny+(OP)
Anthropic was founded by OpenAI defectors who said OpenAI's product strategy was too dangerous and needed more safety research. But in reality, Anthropic has almost exactly the same product strategy. A lot of this is just marketing to raise money to make the founders billionaires, rather than the mere multi-millionaires they would've been if they hadn't founded a competitor.
replies(1): >>astran+AU1
54. Lanzaa+8D1[view] [source] [discussion] 2026-02-04 23:53:55
>>hungry+A91
I think there are two key imperatives that lead to company "psychopathy".

The first imperative is a company must survive past its employees. A company is an explicit legal structure designed to survive past the initial people in the company. A company is _not_ the employees, it is what survives past the employees' employment.

The second imperative is the diffusion of responsibility. A company becomes the responsible party for actions taken, not individual employees. This is part of the reason we allow companies to survive past employees, because their obligations survive as well.

This leads to individual employees taking actions for the company against their own moral code for the good of the company.

See also The Corporation (2003 film) and Meditations On Moloch (2014)[0].

[0] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

55. qudat+nD1[view] [source] [discussion] 2026-02-04 23:55:32
>>mrdepe+7p
Agreed. Companies don’t have the capacity to be moral entities. They are driven purely based on monetary incentives. They are mechanical machinery. People are anthropomorphizing values onto companies or being duped by marketing speak.
56. fogzen+CK1[view] [source] [discussion] 2026-02-05 00:46:48
>>thinkl+rl1
“Ideally” is doing a lot of heavy lifting here.
57. Jumpin+VM1[view] [source] [discussion] 2026-02-05 01:03:34
>>rainco+By1
I can't see how Google turned evil, or how OpenAI did, for that matter.

Google delivered on their promise, and OpenAI, well, it's too soon, but it's looking good.

The name OpenAI and its structure are relics from a world where the sentiment was heavy preoccupation with, and concern about, the potential accidental release of an AGI.

Now that it's time for products, the name and the structure no longer serve the goal.

58. astran+6U1[view] [source] [discussion] 2026-02-05 02:04:02
>>bigyab+R51
Anthropic is a PBC, not a "company", and the people who work there basically all belong to AI safety as a religion. Being incredibly cynical is generally dumb, but it's especially dumb to apply "for profit company" incentives to something that isn't a traditional "for profit company".
59. astran+gU1[view] [source] [discussion] 2026-02-05 02:05:36
>>harith+h01
Anthropic is a PBC. The shareholder goals are public benefit (PB) not "wealth maximization".

(Also, wealth maximization is a dumb goal and not how successful companies work. Cynicism is a bad strategy for being rich because it's too shortsighted.)

60. astran+sU1[view] [source] [discussion] 2026-02-05 02:07:22
>>yoyohe+9M
That's probably not true - government regulators require a lot of privacy work and Android certainly complies with that. Legal compliance is a large business strategy because small companies can't afford to do it.
61. astran+AU1[view] [source] [discussion] 2026-02-05 02:08:26
>>nilkn+sB1
Anthropic hasn't released image or video generation models. Seems pretty different to me.

Claude is somewhat sycophantic but nowhere near 4o levels. (or even Gemini 3 levels)

62. b3ing+m42[view] [source] 2026-02-05 03:33:17
>>Johnny+(OP)
Too late for that; months ago they came out and said they will train on anything you type in there.