
My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.

So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.

However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.



HN is pretty polarised about this - they are either “the good guys” or “doing it for positive marketing”.

I’m in the camp of “the world is so fucked up, take the good when you can find it”.

Beggars can’t be choosers when it comes to taking a stand against dictatorships.


Yeah, the alternative is be OK with their product being used for surveillance.

Not sure why it's controversial that they said no, regardless of the reasoning. Yeah there's a lot of marketing speak and things to cover their asses. Let's call them out on that later. Right now let's applaud them for doing the right thing.

FWIW I do not think they are the "good guys" (if I had a dollar for every company that had a policy of not being evil...). But they are certainly not siding with the bad guys here.


> Let's call them out on that later. Right now let's applaud them for doing the right thing.

Yes, yes, yes. When I first read the stuff about this yesterday, my immediate thought was "wait, these are the only two things they have a problem with?"

But they made a stand, and that still matters. We shouldn't let the perfect be the enemy of the good. At least it's not Grok.


If one really wants to take a stand against this crazy administration, they shouldn’t start it by referring to Hegseth with his assumed title.


I thought that too, but then wondered if they thought better of deliberately antagonizing a very powerful bully.


> the alternative is be OK with their product being used for surveillance.

Their statement didn't indicate they object to their product being used for surveillance, just for domestic surveillance of U.S. citizens.


It's gotta be thus.

For if you don't, the next step is cynicism maximally operationalized: what, you're not doing game-playing/political BS to get ahead? What are you? A chump? An idiot?

That kind of stuff spreads like wildfire, making corporate America ... something else, to put it politely.

Doing the right thing has cost me big time here and there. I don't care. Simultaneously, orgs are not all bad; that's another distortion we can do without.


No surprise many people on YC's site align with Sam Altman's view of the world - right or wrong.


I’m just surprised the alignment guy is struggling with alignment. Dodged a bullet I guess.


If I remember my D&D, Lawful Evil is an alignment.


I think it's definitely true that you should never count on a company to do principled things forever. But that doesn't mean that nothing is real or good.

Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.

In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.


Google's support for the open web is a great example because it was obviously a good thing but also obviously built into their business model that they'd take that position. That made them a much more trustworthy company in those days, because abandoning that position would have required not just losing money for a while but changing their internal structure.


How much value is there in individual values?

Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, their fiduciary duty is to humanity, and the non-profit board would curtail the ambitions of the for-profit incentives. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of these values after the financial incentives grew?

The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward those who choose whatever accelerates AI the most over individuals who are more careful and act on their values--the latter simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statement, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, and slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation against AI progress will always be unpopular, because values warning of the future harms of AI are fighting against the values of saving people from disease and starvation today.


> However, in this instance, it does seem that Anthropic is walking away from money.

The supply chain risk designation will be overturned in court, and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers. Not to mention that giving in would mean they lose lots of their employees who would refuse to work under those terms. In this case, the principles are less than free.


> ...the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

In fact, a friend heard about this and immediately signed up for a $200/year Claude Pro plan. This is someone who has been only a very occasional user of ChatGPT and never used Claude before.

I told my friend "You could just sign up for the free plan and upgrade after you try it out."

"No, I want to send them this tangible message of support right now!"


Still, you’d need a million people to do that to compensate for the $200M military contract.


As an aside, there are probably lots of companies that serve the government seriously considering cutting the government as a customer.

Simply because the money/efficiency they would lose from cutting Claude would surpass the revenue they get from the gov.


Does the military pay $200m per month?


As the parent stated, the Claude Pro plan is $200 per year, not per month.


Gotcha, mixed it up with the Max plan.


Is the government contract 200m per year? Or for a longer period?


Not all that many people


I don't think it's easy to compare how this might affect their bottom line.

Anthropic may gain customers, but OpenAI may lose customers also (or they may even gain customers).

Maybe OpenAI also has to pay their employees more now for "moral flexibility". Or maybe right-wing devs are more inclined to work there, I don't know.


I'm seeing a lot of "QuitGPT" posts. It seems your friend has friends.


I wouldn't be so sure about the courts overturning it. This is yet another opportunity for this administration to test its power. Even if the courts do, it'll be very time consuming and expensive.

Unfortunately this is really bad for Anthropic. Given how quickly the other providers jumped on the opportunity, you can tell how fast things move here, and ultimately that could make the difference for survival in this industry.

I hope something changes, but it can get a lot worse. Individual developers signing up won't help Anthropic. If things get worse, you can rule out Anthropic in most enterprise situations. Supply chain risk means you can't even build software with the thing. Forget about using AI as part of the product, as a user facing feature - people won't be able to build with it as it's part of the supply chain.


Unclear how much damage the designation will do to their dealmaking ability in the meantime. How long will it take for a court to reverse the order?


The longer it takes, the better the impact on their reputation.


The consumer goodwill is working then - it pushed me to upgrade my plan on March 1st... (do they bill on a rolling 30-day cycle, or calendar month to calendar month?)


It’s not rolling 30 days. Lost 2 days of use by subscribing in February.


Thanks! I appreciate the heads up!


> The supply chain risk designation will be overturned in court,

I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.

OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.

And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.

> and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

Maybe. The problem is B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms who also do business with the federal government, who may worry that their use of Claude might have some negative impact on their ability to win US federal deals, and may view OpenAI/xAI (and maybe Google too) as safer options.

I guess the issue is nobody yet knows exactly how wide or narrow the US government is going to interpret their "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.


> OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer.

A tweet does not have the force of law. Being designated a supply chain risk does not mean that companies who do business with the government cannot do business with Anthropic. Hegseth just has the law wrong. The government does not have the power to prevent companies from doing business with Anthropic.


The issue is, even if the Trump admin is misrepresenting what the law actually says, federal contractors may decide it is safer to comply with the administration’s reading. The risk is the administration may use their reading to reject a bid. And even if they could potentially challenge that in court and win, they may decide the cheaper and less risky option is to choose OpenAI (or whoever) instead


They would have a very good case against the government if that were to happen. I suspect that the supply chain risk designation will not last long (if it goes into effect).


Some vendors will decide to sue the government. Others may decide that switching to another LLM supplier is cheaper and lower risk.

And I'm not sure your confidence in how the courts will rule is justified. Learning Resources Inc v Trump (the IEEPA tariffs case) proves the SCOTUS conservatives – or at least a large enough subset of them to join with the liberals to produce a majority – are willing sometimes to push back on Trump. Yet there are plenty of other cases in which they've let him have his way. Are you sure you know how they'll judge this case?


> Are you sure you know how they'll judge this case?

I'm not even sure it will get that far. There's a million different ways that this could go that mean it won't ever come before the supreme court. The designation isn't even in effect yet.

I do think if it goes into effect it will eventually be overturned (Supreme Court or otherwise). There just isn't a serious argument to be made that they qualify as a supply chain risk, and there is no precedent for it.


I call this being ethically convenient. I think Anthropic is playing to the crowd. This admin will be gone soon enough, so no need to drag the brand into the mud. They just need to hold out. They have enough money that walking away from this money isn't impressive. But pissing off the gov is pretty fun and far more interesting.


That's what worries me so much about the development that OpenAI is stepping in. OpenAI's claim is that they have the same principles as Anthropic, but that claim is easy because it's free right now: the govt wants to sell the "old bad, new good" story.

Surely OpenAI cannot but notice that those values, held firmly, make you an enemy of the state?


My reading is that OpenAI is paying lip service. Altman is basically saying "OF COURSE we don't want to spy on Americans or murderdrone randos, but OF COURSE the government would never do that, they just told me so (except for the fact that they just cut ties with Anthropic because Anthropic wouldn't let them do that)"


It's much simpler than that. OpenAI is losing significant market share, and this is a Hail Mary that the government will force troves of companies to leave Anthropic.


> principles are easy when they're free

Indeed. If everything is a priority, nothing is a priority; you only know that something is a real priority when you get an answer to the question "what will you sacrifice for this".



