lebovic a day ago

I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

  • whstl 13 hours ago

    > Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

    After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".

    And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

    • dust42 12 hours ago

      Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors. Otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, it is aligned with VC, because if it is not then the next day there is another CEO. As the parent stated, this is not cynicism. I see this just rather factual, it is simply the laws of money.

      • GorbachevyChase 10 hours ago

        I am suspicious the whole thing is a PR stunt to build public trust.

        • georgefrowny 9 hours ago

          In none of their statements do they say they won't do the things:

          > we cannot in good conscience accede to their request.

          That's very specifically worded to not say "under no circumstances will we do this".

          > Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

          Is not saying they won't eventually be included.

          They've left themselves a backtrack, and with the care there this statement has been crafted, that's surely deliberate.

          • reactordev 9 hours ago

            This. This is a public misdirection. They already signed a new deal. It may be to their disliking but nothing in the statement prevents them from moving forward.

            • uncletammy 7 hours ago

              That is speculation. You might be correct but this statement could simply be a strong signal to the administration to back down. A hail Mary.

              • abustamam 2 hours ago

                Isn't that what we're all doing in this thread? We could certainly take the document at face value but as a parent commenter said, almost every company starts off with "don't be evil" then goes and does evil things.

                Is anthropic different? Maybe. But personally I don't see any indication to give them the benefit of the doubt.

              • 5o1ecist 4 hours ago

                > ... to back down.

                Or else what?

          • hdb2 8 hours ago

            > They've left themselves a backtrack, and with the care there this statement has been crafted, that's surely deliberate.

            What's worse, someone in their PR department will read this thread and be disappointed that the spin didn't work.

          • brookst 7 hours ago

            I mean that’s just adulthood.

            There are outcomes where the US government seizes the company. Not super likely, not impossible.

            It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.

            I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.

            • zhengyi13 4 hours ago

              > There are outcomes where the US government seizes the company. Not super likely, not impossible.

              Are there historical examples in the US specifically where we've nationalized a business?

              Because we've certainly invaded countries and assassinated leaders over exactly the same.

              ETA: I could have answered my own question with two minutes of research. Yes, we have: https://thenextsystem.org/history-of-nationalization-in-the-...

            • 5o1ecist 4 hours ago

              I'm not sure why you are getting downvoted.

              It is indeed a naive, or more likely a dishonest thing to do.

              Anyone can promise anything. When there's little to no accountability and public memory/opinion doesn't last a week (or is easily manipulated anyway), then promises mean literally nothing. Very like how, in politics, temporary means permanent.

              Or HackerNews itself, with them implementing a little Big Brother. It will, of course, absolutely and without a doubt only "nudge" people and it will absolutely, under no circumtances, pinky promise, never get any worse or do anything else but that.

              When there's millions of fools, then those, who actually recognize that they are being fooled, are rarely ever significant in numbers. They're drowned out by the fools, until said fools "wake up" and cry "if only we had known!".

              Well ... you could have known, but in your mindlessness you didn't listen and think.

              "It must be true, because they say so. D'uh. What are you, dumb?"

          • darkwater 9 hours ago

            This. I don't get why you are getting downvoted. The statement literally says:

              Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
            
            Last word is very important: "now".
            • ascorbic 6 hours ago

              I'm not saying whether or not they're planning to back down, but this sentence doesn't imply that. The "now" is clearly meant to be in reference to the fact they've not in the past.

            • ToucanLoucan 8 hours ago

              Being a tech forum centered around VC funding means we have a TON of tech bros (derogatory) here, who believe in nothing beyond getting their own piles of money for doing literally anything they can be paid to do. If you offered these guys $20 to murder a grandmother they'd ask if they have to cover the cost of the murder weapon or if that's provided.

              I get it to a degree, people gotta eat, and especially right now the market is awful and, not to mention, most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point. Why not graduate to doing it with weapons of war too? But, personally, I sleep better at night knowing nothing I've made is helping guide missiles into school busses but that's just me.

        • absoluteunit1 9 hours ago

          I share this sentiment.

          In general - I don’t know if it’s a coincidence but here on HN for example, I’ve noticed an increasing amount of comments and posts emphasizing the narrative of how “well- intended” Anthropic is.

          • ternwer 22 minutes ago

            Feel free to judge them by their actions rather than intentions. This situation being an example.

        • Beestie 9 hours ago

          I'd love to see the financial model that offsets losing your single biggest customer and substantial chunk of your annual revenue with some vague notion of public trust.

          • mingus88 9 hours ago

            This is so short sighted. We are so early into this AI revolution, and this administration is obviously in a tailspin, with the only folk left in charge being the least capable ones we have seen in a decade

            Imagine what the conversation would be like if Mattis, a highly decorated and respected leader were still the SecDef. Instead we are seeing bully tactics from a failed cable news pundit who has neither earned nor deserved any respect from the military he represents.

            We are two elections and a major health issue away from a complete change of course.

            But short sightedness is the name of the quarterly reporting game, so who knows.

            • travisgriggs 6 hours ago

              > We are so early into this AI revolution…

              I keep hoping it’s almost over.

              Not trying to be the Luddite. Had multiple questions to AI tools yesterday, and let Claude/Zed do some boilerplate code/pattern rewriting.

              I’ve worked in software for 35 years. I’ve seen many new “disruptive” movements come and go (open source, objects, functional, services, containers, aspects, blockchains, etc). I chose to participate in some and not in others. And whether I made the wrong choices or not, I always felt like I could get a clear enough picture of where the bandwagon was going that I could jump in, or hold back, or kind of. My choices weren’t always the same as others, so it’s not like it was obvious to everyone. But the signal felt more deterministic.

              With LLM/agents, I find I feel the most unease and uncertainty with how much to lean in, and in what ways to lean in, than I ever have before. A sort of enthusiasm paralysis that is new.

              Perhaps it’s just my age.

            • HumblyTossed 8 hours ago

              I'm seriously worried there won't be more elections. Not hyperbole at all.

              • palmotea 7 hours ago

                > I'm seriously worried there won't be more elections. Not hyperbole at all.

                Why? That's a an unrealistic fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.

                If you want something to worry about, worry about this:

                > And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.

                > Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.

                > And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)

                • filoeleven 4 hours ago

                  It's not an unrealistic fear. Trump has been making noises about "taking over elections." Abolishing elections wholesale is very unlikely, sure, but a sham election rigged by a corrupt government? That's standard fare for authoritarians. And there's evidence of voting anomalies in swing states in the 2024 election.

                  https://www.theguardian.com/us-news/2026/feb/27/trump-voting...

                  https://electiontruthalliance.org/

                  • abustamam 2 hours ago

                    Yeah, Russia still has "elections" for all the good that does them.

                  • HoratioHellpop 2 hours ago

                    Trump _says_ lots. Most of it doesn't come true.

                    • palmotea 5 minutes ago

                      FYI, even though you have a new account, you were banned from your first comment and all your comments automatically show up as hidden-by-default to most users.

                • bostik 3 hours ago

                  It's not who votes that counts, but who counts the votes.

                  (Attributed to Stalin, but likely comes from a despot earlier in the history.)

              • panarky 7 hours ago

                Authoritarian nations continue to have elections, turnout is near 100%, and Dear Leader wins with 90% of the vote.

              • delecti 8 hours ago

                I don't think it's crazy to worry that, but elections are run by the states, there are over 100,000 poling places nationally, and people are pissed. On Jan 3, the entire current House of Representatives terms end; Democratic governors will still hold elections, and if there haven't been elections in GOP-led states, they're out of representation. There are so many hurdles in the way of the fascists canceling or heavily interfering in elections, and they're all just so stupid.

                • ckemere 7 hours ago

                  WaPo headline “Administration plans to declare emergency to federalize election rules.” https://www.washingtonpost.com/politics/2026/02/26/trump-ele...

                  • delecti 7 hours ago

                    Yeah, they can plan whatever they want. No such authority exists, and it must really be emphasized that they're all so stupid.

                    • abustamam 2 hours ago

                      Stupid and effective are not mutually exclusive.

                      I do agree with you that no such authority exists, but this administration seems to get away with a lot of things they have no authority to do.

                • Loudergood 6 hours ago

                  If you think they're pissed now, just wait to see how they react to election interference.

                  I recently read up on how the House of Representatives renews itself and quite frankly it's one of the most beautiful processes I've seen, completely removing the influence of the prior congress.

              • conception 8 hours ago

                Putin crushes every election he has. Of course there would be more elections.

            • re666 8 hours ago

              Mattis- the same highly decorated and respected leader that was on the board of directors at Theranos... edit: added Mattis

          • tdeck 3 hours ago

            This is why we should be skeptical of companies that want to tie themselves to the military industrial complex in the first place.

          • GorbachevyChase 7 hours ago

            Their whole strategy is that the lack of a legal moat protecting their product is an existential threat to human life. They are the only moral AI and their competitors must be sanctioned and outlawed. At which point they can transition from AI as commodity to “value” based pricing.

            It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.

          • Matticus_Rex 6 hours ago

            $200M is >2% ARR at the last numbers we got from them, and would take them back... checks notes... literally only a few days of ARR growth.

          • wartywhoa23 9 hours ago

            I'd love to see any evidence that this single biggest customer is provably and irreversibly lost on all levels of scrutiny as a result of this attempt at building public trust.

          • jrs235 8 hours ago

            The rest of the world moves to using you?

        • HardCodedBias 7 hours ago

          It absolutely is a PR stunt. And the media is cheering.

          It's absurd.

          It's simple: If you do not like working with the military, cancel your contract with the military and pay the penalties.

          They are explicitly not doing that.

          • ternwer 27 minutes ago

            This effectively is cancelling, isn't it?

            You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.

          • FabHK 6 hours ago

            > If you do not like working with the military, ...

            Eh? But they do like to work with the military. How else are you going to "defend the United States and other democracies, and to defeat our autocratic adversaries"?

            They want to work with the military, with just two additional guardrails.

      • heresie-dabord 11 hours ago

        > it is simply the laws of money

        The First Law of Money: Money buys the Law.

        • ohbleek 9 hours ago

          To quote Brennan Lee Mulligan, "Laws are threats made by the dominant socioeconomic ethnic group in a given nation."

          • LordDragonfang 4 hours ago

            The full[1] quote is:

            > “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”

            ...Which is funny, but technically speaking, it's (more or less) a paraphrasing/extrapolation of the very serious political science definition of a state, “a monopoly over the legitimate use of violence in a defined territory”

            [1] Minus the last line, which I will allow others to discover for themselves

          • philipallstar 6 hours ago

            Certainly pre-democracy, other than the ethnic group bit.

        • avmich 10 hours ago

          That's maybe the second law. The first one is: money is always finite.

          Look at how Elon Musk behaved. Do you think VC gladly approved what he did with Twitter? They might want to keep chasing quarterly results - but sometimes, like with Zukerberg, they can't. Not enough money. Similar examples with Google rounds or how much more financially backed politician loses rather often to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants - and that guy is a very wealthy person. There are always limits, putting the money law to the second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.

          • antonvs 8 hours ago

            The Twitter acquisition wasn't funded by venture capital, so your question about VC approval doesn't apply.

            If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.

      • qdotme 12 hours ago

        FWIW, I don’t actually know if board of Anthropic has actual power to replace its CEO or if Dario has retained some form of personal super-control shares Zuckerberg style.

        At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.

        • dust42 11 hours ago

          I'd counter that at this level of capital, if the CEO doesn't well align with the capital, then super-control shares will be overpowered by super-lawyers and if there is need some super-donations. OpenAI was a public interest company...

          • qdotme 2 hours ago

            Not at all. Especially at that level of capital. It’s the equity equivalent of „if you owe a bank a million dollars, you’re in trouble. If you owe a bank a billion dollars, the bank is in trouble”.

            Capital is extremely fungible. Typically extremely overleveraged. Lawyers are on the other hand extremely overprotective. They won’t generally risk the destruction of capital, even in slam-dunk cases. Vide WeWork.

        • nradov 7 hours ago

          Anthropic has an odd voting structure. While the CEO Dario Amodei holds no super-voting shares, there are special shares controlled by a separate council of trustees who aren't answerable to investors and who have the power to replace the Board. So in practice it comes down to personal relationships.

      • Lutger 7 hours ago

        Surely you mean the laws of shareholder capitalism. There are many things you can do with money, and only some of them are legally backed by rules that ensure absolute shareholder power.

    • vladms 7 hours ago

      > everyone in this industry

      So in the last 20 years there is nothing good coming out of the software industry (if this is the industry you mention) ?

      I find it somehow ironic, because this type of generalization is for me the same issue that some of the people saying "they want to make a better place" have: accept reality is complex.

      There were huge benefits for society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000 lots of people were "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only ones but ultimately got copied and pushed from the spotlight.

      • whstl 7 hours ago

        Everyone in this industry making a certain bullshit claim. I did qualify my statement. Don’t cut my words to make a strawman.

        Additionally I state in the end that I do believe it’s possible.

        • vladms 6 hours ago

          So do you know everyone in the industry that made that such a claim? Sure, maybe you meant to restrict it further to "everyone I have noticed personally that said/wrote that" (or anything along the lines), but even then, do you know all the stuff that they did after saying it? (as the statement also included "doing the opposite" which I find quite strong).

          If I see "everyone" I would expect it to actually mean "everyone under the constraints", the word "everyone" has a certain meaning and is very powerful, why use for situations where other words like "many", "most" might be more appropriate?

    • amunozo 13 hours ago

      I don't even think both things are contradictory. People that put too much value in their ideals tend to oversee the consequences of such ideals in real life and do wrong without deviating an inch from their ideals.

      • plufz 13 hours ago

        But is that really the problem in big tech today? To me it looks like sooner or later they cave from their ideals (or leadership changes) and that the reason every time is that they want to make even more money.

        • Peritract 11 hours ago

          I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.

        • moozooh 11 hours ago

          Or, perhaps even more likely, the ideals inevitably get corrupted by access to unthinkable economic power/leverage, like it happened with more or less all other giants with strongly idealistic initial leadership and leadership may actually delude itself into thinking they're still on the right track as a sort of a defense mechanism. Back when they published the article on the Claude-operated mass-scale data breach last year, the conclusions were delivered in a bafflingly casual tone as if it was a weather report: yeah, the world has become a lot more dangerous now (on its own), so you may want to start using Claude for cyber-defense and we are doing our best to help you protect your business. I rolled my eyes at that so hard they popped out of their sockets. Weren't you... the guys... who made it that way and enabled that very attack? Very convenient to sell weapons to both sides, isn't it, not at all like a mafia business. Very responsible and ideal-driven.

          Consider also the part that is going unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans but he says nothing about mass surveillance of anybody else (and, in fact, is proactively giving foreign intelligence a green light in his address) and is deliberately avoiding any discussion on the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed something like 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he's successfully captured the popular sentiment, people are going to consider him the moral champion without bothering to look at these and other glaring contradictions.

          • detourdog 9 hours ago

            Ideals have always been represented in literature as a virtue and a problem for humans. I find real life is no different.

        • ben_w 12 hours ago

          Sure, sooner or later. I don't want to even guess where the new AI companies are on the path that leads to that destination, but right now it looks like Anthropic is not at that stage. Heck, even though a lot of people find Sam Altman slimy, even OpenAI isn't yet at that stage.

        • hsuduebc2 12 hours ago

          I believe that this is classical behaviour of every share holder driven business. You can build on ideals from start, but once you acquire some position, money making is on the menu. Eg. deliberately worsening user experience for better revenue.

          Possiblity to turn on heated seats in car you own for a small monthly fee is absurd yet very real. I'm looking forward to enshittification of current AI tools.

          • Ajedi32 7 hours ago

            Yeah it's not that the people involved have no ideals, it's that the company structure as a whole doesn't, and over time that structure will eventually outlive, corrupt, and/or overpower the ideals of the founders or other principled individuals at the company.

      • hsuduebc2 12 hours ago

        I can’t think of a single thing Meta does that isn’t driven by pure greed.

        • ben_w 12 hours ago

          Yes, though Meta is a bad example as they started off with the values of Zuckerberg, and still have them.

          • endofreach 12 hours ago

            Exactly right. But i think it makes it a good example actually. Company DNA is a thing. Bill Gates isn't running microsoft anymore. Still...

          • hsuduebc2 12 hours ago

            What would be more appropriate example?

            • ben_w 11 hours ago

              Apple, Tesla, Oculus.

              The first two are definitely "heroes who lived long enough to be villains"; Oculus is more of an "I recon" due to how it was seen right up until getting bought by Facebook.

              Adobe?

              • hamasho 8 hours ago

                But in the stock market, it is almost impossible for companies like Anthropic or any successful startups not to become villains (profit first no matter what). Anthropic especially needs to burn huge amount of money, so they need a lot of funding. The only way to keep founders' idealism is probably to copy Zuckerberg. Divide stocks with and without voting-power and trade only no-voting stocks.

                • ben_w 6 hours ago

                  I'm not denying 95% of that, only saying that Zuckerberg didn't have any idealism to lose in the first place.

                  • hsuduebc2 3 hours ago

                    I actually forgot that his first site was facemash which single purpose was to rate "hotness" of each individual girl on his University.

                • freejazz 3 hours ago

                  Anthropic is not a public company.

              • shafyy 8 hours ago

                LOL, Palmer Luckey is a right-wing war mongering psychopath.

        • mikkupikku 8 hours ago

          All of Meta's VR stuff should rationally be cut loose and refunded if it were all about greed. That stuff only survives because Zuck is a nerd who wants it to happen (but it's not going to.)

        • amunozo 9 hours ago

          Oh sure. I don't want to say everybody are driven by ideals and not greed, but that even people with strong ideals and good intentions can do a lot of bad by being blinded by those same ideals.

    • mcv 7 hours ago

      Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.

    • OtherShrezzing 13 hours ago

      I think most people are conscious that, irrespective of a founders vision, company morals usually don't survive the MBA-inisation phase of a company's growth.

      • qdotme 12 hours ago

        Depends. Many still reflect the founders vision; even if that vision might have evolved over time.

        • AndrewKemendo 8 hours ago

          Can you provide an example of that for an American venture backed corporation older than a decade?

          • achenet 5 hours ago

            Not the person you're replying to, and I may be wrong about this, but Amazon?

            Jeff's original vision was "relentless customer focus" and ...

            actually on second thought I'm seeing the argument 'Amazon stopped caring about customers and is in full enshittification mode at this point'.

            But maybe Amazon circa ~2010/2015, or Google around 2010 was still pretty close to the original vision of customer service/organizing the world's information.

            Or Apple? They're still making nice computers, although not sure they count as VC backed.

            Stripe perhaps? Hashicorp?

            • AndrewKemendo 3 hours ago

              Well Google‘s vision was to catalog all the world’s data

              Apple wanted to make personal computing stable - they were absolutely VC backed

              I suppose the original question is vague enough that it could always encompass everything which is founders vision even if the vision changes so it’s like OK well then then there’s nothing really to say that you’re stable too it’s just some whatever the function of the person who started the organization is and even that you could debate

      • whstl 11 hours ago

        True. Which is all the more reason for calling bullshit on claims of "doing good" or "having ideals" by anyone building a company that can eventually be ran my MBAs.

      • j45 10 hours ago

        The impact of MBAs might be decreasing..

    • 5o1ecist 4 hours ago

      > not related to people's "understanding".

      Except for the understanding that it's foolish to believe anything that sounds too good to be true. Yes, believing that people who want to make money/achieve positions of power, also want to make the world a better place, is absolutely foolish. Ridiculously foolish.

    • Aperocky 9 hours ago

      Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.

    • tyingq 8 hours ago

      I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.

      I understand Anthropic is not public, but I assume there's an IPO coming.

    • jug 3 hours ago

      This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI on the values of AI safety and principles.

    • lebovic 5 hours ago

      I don't think it's cynical to believe that a company can make the world a worse place, or that Anthropic as a company will make many horrible choices.

      I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.

    • personjerry 4 hours ago

      At some point I've wondered if "fiduciary duty", when pushed to highest corporate levels, always conflicts with "make the world a better place"

      i.e. Fiduciary Duty Considered Harmful

    • wartywhoa23 11 hours ago

      Cynicism is the newspeak substitute for sincerity, no need to worry about being called a cynic in this post-truth world of snowflakes.

    • puppymaster 10 hours ago

      and that's okay. so we judge them one decision at a time. So far, Anthropic is good in my book.

    • tristor 3 hours ago

      > Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

      To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).

  • lm28469 15 hours ago

    Idk man, from the outside anthropic looks a lot like openai with a cute redisgn and Amodei like Altman with a slightly more human face mask, the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

    • UqWBcuFx6NV4r 15 hours ago

      > the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money

      This is pretty low on my list of moral concerns about AI companies. The much more concerning and material things include things like…what this thread is actually meant to be about.

      VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.

    • dudefeliciano 14 hours ago

      Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.

    • dudefeliciano 14 hours ago

      Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defence (yes that's still the official name), is more than Altman has done for AI safety.

    • skyberrys 14 hours ago

      Don't you always need more money though? I am a chip designer and I can tell you I am resource intensive to employ. I want access to plenty of expensive programs and data. With more money comes better tools and frequently better tools leads to the quality results you want to deliver to the customer.

      • lm28469 14 hours ago

        Do you tell your customers you need money to build better chips or that you need more money because your next generation of chips will channel Jesus soul back to earth and cure cancer?

        • tehryanx 12 hours ago

          where is anthropic hyping like that? Most of what I see coming out of anthropic is deep context releases on research they're doing.

          • lm28469 11 hours ago

            > Mar 14, 2025, 7:27 AM CET

            > "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"

            It's the same old trick, "in two years we'll have fully self driving cars", "in two years we'll have humans on Mars", "in two years AI will do everything", "in two year bitcoin will replace visa and mastercard", "in two year everyone will use AR at least 5 hours a day", ...

            Now his new prediction is supposed to materialize "by the end of 2027", what happens when it doesn't? Nothing, he'll pull another one out of his ass for "2030" or some other date in the future, close enough to raise money, far enough that by the time it's invalidated nobody will ask him about it

            How are people falling for these grifters over and over and over again? Are we getting our collective minds wiped out every 6 months?

            • brookst 7 hours ago

              Your quote supports hype but does not support your claim that Anthropic is telling customers they need more money to deliver the hype.

              Of course Anthropic is saying that to investors. Every company does that, from SpaceX to Crumbl. “If you give us $X we will achieve Y” isn’t some terrible behavior, it’s how raising funds works.

              • goatlover 2 minutes ago

                Elizabeth Holmes is serving time for promising investors something her company couldn't deliver, so there is a line beyond which hype becomes fraud. Probably AGI, ASI, and fully automated societies aren't something well enough defined for courts to rule on, unlike making unfounded medical diagnoses from a pinprick of blood.

            • ToValueFunfetti 9 hours ago

              I work at a non-tech Fortune 500 and this is looking nearly spot-on from here. Nobody on my team touches the code directly anymore as of about 2 months ago. They're rolling it out to the entire software department by June. I can't speak to the economy at large, but this doesn't look like baseless hype to me. My understanding is that Claude Code reached this level late last year, ie. Amodei was just wrong about uptake rates.

    • District5524 13 hours ago

      They both work in the same market but they have pretty different careers and understandings. I simply can't believe why on Earth would people choose Altman over Amodei to trust in these kind of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted in his decision. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...

      • Keyframe 13 hours ago

        Amodei believed Altman, so there's that. I don't (have to) believe either. If product works for me, it works. Raising their clanker products to second coming is for investor relations, of which I am proud to day I am not.

      • viking123 11 hours ago

        Both are hucksters, although Amodei's qualifications are pretty good, he actually is a scientist. Out of these I think Hassabis is my favorite

      • ori_b 12 hours ago

        I don't know why anyone would trust any of the above.

    • kseniamorph 10 hours ago

      disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI

    • rhubarbtree 15 hours ago

      There should be a name for this, “cynic cope: when someone actually takes a principled view the cynic - who has a completely negative view of the world - is proven to be wrong, can’t accept it, and tries to somehow discount it.

      • marxisttemp 14 hours ago

        Corporations do not and cannot have principles, they only have the profit motive

        • parasubvert 14 hours ago

          This is false. People can have principles, profit motive is not something a corporation has, it's something people have. Corporations do things all the time that are based on everything from principles, to the personal whim of executives, to exercise in ego, to community benefiting actions, or to screw customers for extra profit. It is entirely dependent on the specific people in management roles.

          Corporations need profit to survive because the cost of tomorrow is a surplus of today.

          • anon_e-moose 12 hours ago

            A corporation is a bunch of people cooperating to achieve a common goal.

            There is a very important factor that heavily influences (perhaps even controls?) how people act to achieve that goal, and sometimes even twists or adds goals.

            Is that corporation publicly quoted in the stock market or is it private?

            Look at how steam behaves, it's private and more ideological VS how many other publicly quoted companies, whose CEO often sacrifices his own corporation's long term survival for the benefit of short-term profiteering and some hedge fund manager's bonus.

            Both need profit to survive, but the publicly quoted company is much more extreme.

            When people say corporations only look to profit, what they really mean is that publicly quoted corporations will do everything possible to maximise short term profit at any cost. Is there a CEO caring for long term? Either he will be convinced to change or kicked out. It's almost impossible for someone to resist these influences in publicly quoted companies. It's just how Wall Street works and if that doesn't change neither will corporations.

            The people running the world of finance and their culture are what causes enshittification and pushing a zero-sum game to extremes.

            • vladms 7 hours ago

              Agree with everything, but would add a small detail : publicly quoted corporations might as well sell dreams and if they are very good at doing that have no profit because of some future potential pay off (of course I am writing this from my fully self driving car that I own since 10 years ago, that might transform in a robot soon).

          • moozooh 11 hours ago

            Sadly, market incentives pretty much always go opposite of moral incentives because morals put breaks on decisions that multiply value for the company but the company itself exists for multiplying value. The profit motive is built into the reason for its existence. It's a contradiction that has a lower probability of resolving in favor of morals as the company grows in size and accrued capital. Whichever moral principles the leadership may have had at the beginning, they always erode or get perverted over time simply because the market always has a stronger pull.

            I hate that, by the way, but what I hate even more is that this is somehow the most effective way to run economies that we've found so far, and it ends up this way because instead of unsuccessfully trying to safeguard against greed and sociopathy, it weaponizes them outright.

            • vladms 7 hours ago

              I find "morals" difficult to evaluate objectively. Some people might find it "moral" that women do not have any education and just stay at home, which I find terrible.

              But if most people in a society find something "wrong" generally they will organize to prevent that (even if it has value for a part of the society). I think it is simpler for everybody that economics (how we produce and what) is separated from morals (how we decide what is right and wrong).

              • moozooh 5 hours ago

                It may appear simpler on the surface but it's very easy to find that market forces that don't have any checks and balances on them eventually converge on increasingly aggressive and dehumanizing behavior—not unlike your example with women. I have many such well-documented behaviors to list as examples, and I guarantee you have encountered them regularly and been upset at them.

                The way we organize in a society is by having governments, usually elected ones to represent what "most people in a society" actually think, to serve as an arbiter of applied morals in our interactions, including business. To that end, we codify most of them in laws with clear definitions to prevent things like unfettered monopolies, corporate espionage, poor working conditions and hiring practices, etc. This generally works, though it depends on how well a given government and its constituent parts does its job and whether it uses the power it has to serve the entire society's interests or the interests of the elites that drive decisions. We can see right now how it fails in real time, for example.

                Morals don't have to be evaluated "objectively" (whatever that is) every time to be observed. Humanity has agreed on many things that make up UDHR, international law, and other related documents. It's not the hard part. Making independent actors conduct their business in accordance with these codes is the hard part. Somehow even making them follow their own self-imposed principles is crazy hard for some reason. When Amodei claims Anthropic develops Claude for the benefit of all humanity but greenlights its use for surveillance on non-Americans, that's scummy. When Amodei claims to be terrified of authoritarian regimes gaining access to powerful AI but seeks investment from them, that's scummy. The deal with Palantir, the mass-surveillance business, is scummy. Framing the use of autonomous weapons as only disagreeable insofar as the underlying capabilities aren't reliable enough is scummy. You don't need to be a PhD in morals to notice that.

          • marxisttemp 7 hours ago

            something something the ideology of a cancer cell. The only goal of a publicly traded corporation is to make the line go up, and the board is required to eliminate anyone who puts other principles before that.

            • FabHK 6 hours ago

              Tim Cook memorably said (in 2014): "When we work on making our devices accessible by the blind, I don't consider the bloody ROI."

              How come the board hasn't eliminated him?

    • jama211 14 hours ago

      Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.

      • lm28469 14 hours ago

        This is a wantrepreneur forum not a peer published scientific journal, my opinions about vibes matter as much as private companies PR campaigns

        • jama211 4 hours ago

          Sure they do buddy.

  • heresie-dabord 11 hours ago

    > how driven by ideals many folks at $Corporatron are

    Well let's see... it says in the post:

        * worked proactively to deploy our models to the Department of War and the intelligence community. 
    
        * the first frontier AI company to deploy our models in the US government’s classified networks, 
    
        * the first to deploy them at the National Laboratories, and 
    
        * the first to provide custom models for national security customers. 
    
        * extensively deployed across the Department of War and other national security agencies
    
        * offered to work directly with the Department of War on R&D to improve the reliability of these systems
    
        * accelerating the adoption and use of our models within our armed forces to date.
    
        * never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
    • wrsh07 8 hours ago

      They didn't claim to have pacifist ideals

      In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.

      Just because you disagree with their ideals doesn't mean they're not holding to theirs

      • leshow 7 hours ago

        They sound exactly like George Bush and every other American leader who's claimed high minded ideals while they engage in interventions in direct contradiction to those ideals around the world

        • wrsh07 3 hours ago

          To be clear, I don't think anthropic is itself intervening.

          The concerns they've raised about authoritarianism is "AI enabling authoritarians."

          When they push back on the US government wanting to use Claude to (legally) surveil US citizens, that still feels consistent to me as a concern about authoritarianism.

          I think it's reasonable to hear high minded ideals and become skeptical, but in this case I'm surprised that people are trying to accuse them of hypocrisy

    • mikkupikku 8 hours ago

      Lots of people driven by ideals work for the US military. Not me, ever, but other people certainly.

  • neom 21 hours ago

    I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.

    • bobsomers 18 hours ago

      > I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.

      This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.

      • neom 17 hours ago

        I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.

        • RHSman2 15 hours ago

          That is a good LinkedIn endorsement of ever I saw one!

      • klodolph 17 hours ago

        Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.

        • inigyou 14 hours ago

          If someone is in an environment where they have to do XYZ or die, their choice to do XYZ might not reflect their personality, but the environment where they have to do XYZ or die.

        • vintermann 14 hours ago

          But if you were watching them, was there really no freedom from consequences? At least there was the risk of you thinking less of them.

          I think that really cruel people want you to know when they can act with impunity, it's part of the appeal to some. The Anthropic people don't seem like that sort, at least. But plenty of horrible people have still not been that sort.

          • klodolph 13 hours ago

            > But if you were watching them, was there really no freedom from consequences?

            Ah, so I think you may have done a little hop and a jump over a critical, load-bearing term which is “feel like”. You get to observe people who feel like there are no consequences. Their feelings may or may not be accurate.

            You can sometimes see people who treat service workers, servants, or subordinates poorly because they feel like it’s permitted and free from consequence. You can also sometimes see people reveal things about themselves when playing games. It’s kind of a cliché that people find out that they’re transgender at the D&D table, and it happens because it’s a “consequence-free way” to act out a different gender role.

            Or we can talk about that magic ring that makes you invisible. You know, the ring of Gyges, or that of Sauron. People can’t actually become invisible, but you can sometimes catch them in a situation where they think they can do something wrong and not get caught.

        • wahnfrieden 15 hours ago

          Free from consequence. In other words, free of any stakes. Zero stress low stakes environments enable larping.

    • bahmboo 17 hours ago

      Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!

      • neom 17 hours ago

        Well I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last.. I dunno, 13 year or something, we've had long deep talks about lots of things, pre-ai world: what it takes to build a big business, will QC ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built...all that stuff. Then... attention is all you need, bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NN scales, so joined that lab, first 5/10 or so iirc...to head AI policy. That little lab grew, stuff happened, the next part isn't mine to share but so much as to say: Anthropic was basically born out of the expectation that this moment would come and more...extremely human focused...voices should be at the table, that is Anthropic, that idea, they left their jobs at the aforementioned lab - and started their own startup to make sure a certain tone/voice/idea was always represented. Around the summer 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew, that I suppose, was always the plan? So it was never unexpected to me they would act this way, that is what Anthropic is all about. Here we are.

      • lebovic 17 hours ago

        Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.

        They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam were the "responsible scaling officer", meaning they were responsible for Anthropic meeting the obligations of its commitments to building safeguards.

        I think neom is referring to Jack Clark, another one of the seven cofounders.

      • arduanika 17 hours ago

        I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feel like "inside baseball" at best, mildly culty at worst, and here outside the walls of Anthropic, we're going to see those names and think of Kushner(??), Altman, and maybe Dorsey, and get confused.

        FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.

        For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!

        Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.

      • kunai 17 hours ago

        [flagged]

        • dang 16 hours ago

          Please don't do this here.

    • taurath 19 hours ago

      > it's easy to know how they will act when the going gets rough

      Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.

      That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would do better if not the same day he fires 4000 people to do the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Other Option" to justify our conservatism.

      • coffeemug 16 hours ago

        Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts something like ~49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50 year anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".

      • vintermann 14 hours ago

        In these days of the Epstein mails, it's worth remembering one thing that's become clear: Epstein was an extremely nice guy. He seemed kind, sincere, interested in what you were doing, civilized etc.

        But to quote Little Red Riding Hood in Stephen Sondheim's musical: Nice is different than good. It's hard to accept if people you really like do horrible things. It's tempting to not believe what you hear, or even what you see. And Epstein was good at getting you to really like him, if he wanted to.

        That doesn't mean we should be suspicious of niceness. It just means that we should realize, again, nice is different than good.

        • convivialdingo 6 hours ago

          Anyone who's grown up around the upper class social strata understands this to be true.

        • sp00chy 13 hours ago

          In German you say „Nett ist die kleine Schwester von Scheisse“ which means „Nice is the polite version of being an asshole“. And this is how I cope with what decision-makers say. Zuckerberg was also „nice“ for a long time.

      • michaelhoney 18 hours ago

        "people's traits flanderize": nice

      • rl3 19 hours ago

        >Even if you went to burning man and your souls bonded ...

        I'll take: List of places I never want to bond my soul with someone at for one thousand, please.

        • taurath 19 hours ago

          They get an air conditioned trailer and pay "sherpas" to do their chores, so its basically just a hotel suite

        • tummler 18 hours ago

          Oh, that's the best place for souls to bond.

          • webnrrd2k 16 hours ago

            Bond to what -- that's the real question

            • shawn_w 13 hours ago

              Playa dust. It's certainly permanently bonded to my car.

    • ajyey 19 hours ago

      This is insanely naive

      • parasubvert 14 hours ago

        Cynicism isn't always correct.

    • skeptic_ai 19 hours ago

      [flagged]

      • Vaslo 19 hours ago

        Huh? Why would they be in prison??

        • skeptic_ai 17 hours ago

          > they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries

          They are US adversaries if they don’t give to USA what they want… so as an adversary that doesn’t do what’s told to fit in line… you must go to prison.

          • Vaslo 7 hours ago

            This is silly. No one at anthropic is going to prison for this. It only hurts their ability to do business with US government customers which is a net negative for all. Anthropic will come around.

    • noduerme 16 hours ago

      The nature of evil is that it's straight down the road paved with good intentions.

  • mondrian 11 minutes ago

    Late comment but I think this is probably a naive business strategy for an American company. Amodei seems to underestimate how much the US economy operates on relationships, connections and reputation. Granted this admin is really aggressive, but if Anthropic is marked a supply chain risk, they're screwed because virtually every US enterprise is a downstream contractor. And in lieu of B2B and government, they lack a direct-to-consumer moat. I commend his apparent assumption that the US market competes on capabilities (also betrayed by his predictions that AI will quickly destroy the white-collar class) but the reality is less an open free market and more a complex web of entrenched relationships. And going back to his prediction that AI will destroy the white collar class, this is where the bulk of inter- and intra-entity relationships live. In an economy driven by relationship moats, why would a CEO sever his relationships in exchange for a better tool?

  • imjonse 16 hours ago

    > I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,

    I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.

    They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

    • versteegen 14 hours ago

      > They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

      Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

      https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

      > I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.

      > What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.

      • MichaelDickens 4 hours ago

        > Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

        Can you imagine a world where Anthropic says "we are changing our RSP; we think this increases AI risk, but we want to make more money"?

        The fact that they claim the new RSP reduces risk gives us approximately zero evidence that the new RSP reduces risk.

    • nla 9 hours ago

      Yea, that Sam only does this because, "he loves it." They're not in it for the money.

      • lebovic 5 hours ago

        Sorry, I meant a different Sam – Sam McCandlish, not Sam Altman.

        Wasn't expecting this post to get so much attention.

      • Aperocky 9 hours ago

        That's not fair, Sam can love money too and there is no conflict here.

  • drawfloat 12 hours ago

    "Mass surveillance of anywhere else in the world but America" is not the great idealistic position you are making it out to be.

  • yunnpp 20 hours ago

    It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

    I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.

    And in any case, this is difficult territory to navigate. I would not want to be in your spot.

    • eternauta3k 12 hours ago

      Come On, Obviously The Purpose Of A System Is Not What It Does

      https://www.astralcodexten.com/p/come-on-obviously-the-purpo...

      • Peritract 11 hours ago

        I don't think that article makes a strong case; it deliberately phrases examples in the most ridiculous ways and pretends that this is a damning criticism of the phrase itself; it's 'you're telling me a shrimp fried this rice' but with a pretence of rationality.

        • sebzim4500 10 hours ago

          I think it makes a pretty compelling case that most invocations of the statement are either blindingly obvious or probably false. Can you give a counterexample?

          • Peritract 8 hours ago

            > most invocations of the statement are either blindingly obvious or probably false

            So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.

            Secondly, look at this one specifically:

            > The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.

            Firstly, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.

            Secondly, it is an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.

            The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.

            I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.

            [1] https://xkcd.com/169/

  • i_love_retros 10 hours ago

    Driven by ideals? Yeah right. That first paragraph he says they work with the department of defense to protect us from authoritarianism. What?! You are working with an authoritarian regime you cynical fuck. Getting paid by them. And now you act all virtuous because you won't make autonomous weapons.

  • GardenLetter27 13 hours ago

    Anthropic doesn't want us to have the right to run open weight models on our own computers. They were never the good guys.

    • u1hcw9nx 13 hours ago

      What I read is: Anything not open source, open weight, is evil.

      I disagree. The concept of nuance, putting things in context, is the source of all good in internet discussions.

      • GardenLetter27 12 hours ago

        No, but lobbying the government to prohibit open source / open weight models is evil.

        They literally want to use state violence to control what we can do on our own computers.

        • FabHK 6 hours ago

          Anytime there is any law about anything you can say that it's ultimately backed "using state violence". That's just silly. As silly as the notion that there shouldn't be any rules and limits whatsoever about what you can do with your computer.

          • ImPostingOnHN 4 hours ago

            > As silly as the notion that there shouldn't be any rules and limits whatsoever about what you can do with your computer.

            Hard disagree. There shouldn't be any rules or limits whatsoever about what I can do with my computer, and especially ON my computer, as long as the thing I'm doing doesn't break other laws (CFAA, CSAM, etc).

            This is, after all, Hacker News.

  • ozgung 13 hours ago

    The problem with companies, you see, is that they are a separate entity than their founders, shareholders or current leadership. A Company has no soul or unchangeable intentions. Claude’s SOUL.md is just an IP that can be edited at any time.

  • snickerbockers 20 hours ago

    >I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

    I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products but unless it was already part of their agreement (which i very much doubt) they are not entitled them to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.

    • jsnell 18 hours ago

      Where are you getting that from?

      The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.

      > Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

      It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.

      • snickerbockers 16 hours ago

        I think it largely hinges on what they mean by "included"; does that mean it was specifically excluded by the terms of the contract or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically-incorrect subject-matters). Sometimes both parties have valid cases when there's a contract disagreement.

        >A pretty clear indication that the current language has some.

        Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.

    • zaptheimpaler 15 hours ago

      This is all just completely wrong. Anthropic explicitly stated in their usage use of their products is not permitted in mass-surveillance of American citizens and fully automated weapons, in the contract that DoW signed. Anthropic then asked DoW if these clauses were being adhered to after the US’ unlawful kidnapping of Maduro. DoW is now attempting to break the contract that they signed and threatening them because how dare a company tell the psycho dictators what to do.

      • oceanplexian 6 hours ago

        > US’ unlawful kidnapping of Maduro.

        The what now?

        Maduro is being prosecuted and there was a warrant out for his arrest. There is no magic soil exemption if you commit a crime against the United States and flee to another country.

      • snickerbockers 12 hours ago

        What on earth does "Two such use cases have never been included in our contracts with the Department of War" mean? Did they specifically forbid it in the contract or was it literally just not included? Because I can tell you that if it's the latter that does not generally entitle them to add extra conditions to the sale ex post facto.

        >threatening them because how dare a company tell the psycho dictators what to do.

        Dude it's a private defense contractor leveraging its control over products it has already installed into classified systems to subvert chain of command and set military doctrine. That's not their prerogative. This isn't a "psycho dictator" thing.

        • SpicyLemonZest 8 hours ago

          They have always maintained an acceptable use policy forbidding these things. It was not controversial, because the Pentagon claims they have no interest in doing them in the first place, until a regime-aligned executive at Palantir decided to curry favor by provoking a conflict.

  • cue_the_strings 9 hours ago

    Don't attribute to ideals what is simple self-preservation.

    No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families lives. Just like the rest of us.

  • FeloniousHam 7 hours ago

    > Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals.

    Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."

  • bambax 16 hours ago

    This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.

    What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?

    • roughly 15 hours ago

      > What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?

      Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.

      • rhubarbtree 15 hours ago

        More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.

        • alfiedotwtf 14 hours ago

          This administration^Wregime has a lot of experience pressuring publicly with high stakes followed up by making backroom deals that would even make Jared Kutcher blush.

          This is protection racketeering 101! So much so, that if any form of a functioning US judicial systems makes it past 2028, I’m willing to put money on that more than a handful of people in the upper echelons of today’s administration will end up getting slapped with the RICO Act.

    • D_Alex 16 hours ago

      I'm a bit underwhelmed tbh. Here is Anthropic's motto:

      "At Anthropic, we build AI to serve humanity’s long-term well-being."

      Why does Anthropic even deal with the Department of @#$%ing WAR?

      And what does Amodei mean by "defeat" in his first paragraph?

      • jazzyjackson 16 hours ago

        DoD and American exceptionalists also believe American foreign policy is in service of humanity’s long term well being

        • temp8830 15 hours ago

          It is all for the benefit of man. We even get to see the man himself daily on television.

        • mapt 15 hours ago

          Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, which puts a zero sum ethnonationalism at the core.

        • viking123 10 hours ago

          I think the last few months have shown pretty clearly in whose service this policy is. If China went to attack Taiwan, west has no moral high ground left.

        • Balinares 14 hours ago

          One of the hallmarks of fascist thinking is the dehumanizing of opponents and minorities, so within their own messed up framework, they might even mean it.

      • parasubvert 14 hours ago

        There was a time (1943?) when dealing with the US department of war meant serving for humanity's long-term well being.

        • gambiting 14 hours ago

          Look I'm not going to disagree, obviously - but even in those times, you could argue that helping the department of war in some ways will contribute to deaths you might not necessarily want to be a part of. Bombing of Hiroshima and Nagasaki is still widely discussed today for a myriad of reasons, as is conventional bombing of cities in both Nazi Germany and Japan. We can both agree that fighting nazis is a good thing while at the same time have a moral objection to participating in the war effort.

          And I think the stakes have changed today - it's one thing to be making bombs which might or might not hit civilians, it's another to be making an AI system that gives humans a "score" that is then used by the military to decide if they live or die, as some systems already do("Lavender" used by the IDF is exactly this).

          Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.

          • HoratioHellpop 2 hours ago

            //but even in those times, you could argue

            This is the oft-spoken fallacy of the benefit of hindsight. Folks in that situation 80 years ago did what they had to do, to stop Japan from continuing to rape and murder hundreds of thousands of people in southeast Asia. But of course, you would have found a better option. How's the view, standing on the shoulders of giants?

      • moozooh 10 hours ago

        Look up when Anthropic signed a contract with Palantir and then look up what Palantir does if you want an even better reality check on following the ideals. I chuckle every time.

        And nobody knows what he means by "defeat" because no journalist interrogates or pushes back on his grand statements when they hear it. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets to it first but he never elaborates on why or what he expects to happen if the opposite comes to pass. I am assuming he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such "empowered democracy" than China. Because of Greenland, because of "our hemisphere". Hard nope to that.

        Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says it outright in the text. Humanity stops at Americans.

    • Synthpixel 16 hours ago

      Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.

      Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.

      • bambax 14 hours ago

        But how can they avoid it, why are they not asked?

    • tpm 13 hours ago

      Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions and the DoD likes their stuff so much it wants to use it more broadly, so they want to change the terms of the agreement(s). Anthropic disagrees on some points; DoD wants to force them to agree.

  • zer0gravity 7 hours ago

    The probability is high that major AI development companies are already using an AI instance internally for strategic and tactical decisions. The State power institutions, especially intelligence, are now having a real competitor in the private sector.

  • Yizahi 13 hours ago

    Exactly which values they are "going to burn at a stake for"? Making as many people homeless as they can in the shortest possible time? Befuddling governments and VCs into creating an insane industry-wide debt which would either lead to a "success" in replacing jobs or an industry-wide crisis? Or maybe a value of stealing intellectual property of every human on the planet under the guise of "fair use" and then deliberately selling the derivative product? Or the value of voluntarily working with "national security customers" when it suits them financially and crying foul when leopards bite their faces? Or the value of ironically calling a human replacement machine "anthropic" as in "for humanity"?

    Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.

  • bertylicious 14 hours ago

    "They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.

    So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to "department of war" and is acting aggressively imperialist in a way like the US hasn't in a long time.

    And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?

    • lebovic 14 hours ago

      Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.

      I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.

      Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for for many – but not all – people at Anthropic.

      That doesn't guarantee a good outcome, and there's still a hard road ahead.

    • jghn 7 hours ago

      > to rename the DoD to "department of war"

      The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.

    • marxisttemp 14 hours ago

      Careful speaking truth to power on this site, remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham who as of late has made a habit of posting right wing slop on his Twitter

    • viking123 10 hours ago

      > And who exactly are these "autocratic adversaries" they are mentioning?

      Anyone that Israel doesn't like

    • DeepSeaTortoise 13 hours ago

      > Except for the victims of sexual abuse perpetrated by their clergy.

      I honestly wonder how much of this is made up. Given the size of whole organization and it holding onto its weird priciples regarding the personal relationships of its members (introduced in the far past to limit the secular power of its clergy), there certainly will be SOME cases.

      But in the one case a frater, who I knew, got convicted, he definitely didn't do it. He was accused by several independent former students and even some of the staff backed the students claims with first hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.

      The only problem: He wasn't with the group at that time at all. I screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up and nobody (but maybe me with him) would get in trouble over it. Turned out he forgot refueling, both of us stayed at a pastor's guest house and he called the group telling them, that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened at another short stay of the group where they spent a day visiting some mine before joining with us again.

      Almost 3 decades later he got railroaded in court, me learning about it in the news.

      • bertylicious 11 hours ago

        I'm confused. You heard about someone you knew being wrongfully convicted of a crime he didn't commit and you could have provided the testimony to clear him, but you just decided not to? Why not?

        • DeepSeaTortoise 6 hours ago

          I never was contacted during the trial and only read about it almost 2 years later in the news.

          Also, he's a man of strong faith, not that he knows he'll win in the end, but more like that it just doesn't have the same importance for him as it would have for us. I only had a short opportunity to ask him about it since then and basically he doesn't think there is just about any chance to win this, what he's most worried about is ruining the public image of his students (including his accusers) and since his order allowed him to rejoin and start over, in practice, he got all he wanted to ask for already.

  • comandillos 14 hours ago

    To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but then as always who knows what will happen behind the scenes. Just see when most major US companies had backdoors on their systems providing all data to the NSA, i.e. PRISM.

    • lonelyasacloud 13 hours ago

      >just another marketing stunt

      What evidence on _Amodei_ and his actions leads to that conclusion?

      • moozooh 10 hours ago

        Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir. They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance. They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

        When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.

        • lonelyasacloud 4 hours ago

          >Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir.surveillance of Americans but they happily deal with Palantir.

          Palantr will also be subject to the same contractual limitations as the DoD.

          >They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance.

          The stated red lines are about mass domestic surveillance and fully autonomous lethal weapons - and those are the kinds of restrictions you’d expect to apply to any government using the tech on its own population, not just the US.

          While For American agencies to use Anthropic's models against other sovereign states requires the access to the raw data from that state which is somewhat of a practical firebreak. Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?

          > They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

          What is the realistic alternative? sit quietly and pretend scaling isn't a thing and dual use does not exist? Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?

          Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.

          • moozooh 3 hours ago

            > Palantr will also be subject to the same contractual limitations as the DoD.

            Well, first of all, we don't actually know that. Second, I'm going to question the commitment of any company to the principles of democracy and AI safety if one of their bigger partnership is with a literal mass surveillance, Minority-Report-crap company. It's the most confusing business partner to see when you're positioning your company as THE ethical one. If you're dealing with Palantir, you're helping mass surveillance, full stop, because that's what this company does. Which country's citizens get the short end of it is completely irrelevant (though in all likelihood it's still Americans because that's Palantir's home turf).

            > Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?

            If that's how we characterize the current regime (which I actually agree with), then how come he's proactively trying to help it, deal with it, and insist it's a democracy that needs to be "empowered"? Sounds backwards to me. When you're about to be persecuted by your own government for not allowing it to use your models to do some heinous shit, this sounds like exactly the kind of government you shouldn't be helping at all (and ideally not do business where it can reach you). This is not normal.

            > What is the realistic alternative? [...] Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?

            If you notice that you're doing harm and you're concerned about doing harm, stop doing harm! Don't make it worse! "If I hadn't pulled the trigger, somebody else would" is a phrase you wouldn't expect to hold up in court. Similarly, racing to the bottom to be the most compassionate, self-conscious, and financially successful scumbag is the least convincing motivation imaginable. We will kill you quickly and painlessly unlike those other, less scrupulous guys! Logic like this absolves bad actors from any responsibility. The amount of harm stays the same but some of it gets whitewashed and virtue-signalled, and at the very minimum I'd expect the onlookers like ourselves not to engage in that.

            > Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.

            These aren't principles. What he's doing here is a free opportunity for incredible PR and industry support that he's successfully taken advantage of. The actual policy backslides, caveats, and all the lines that had been crossed prior will not receive as much press as the heroic grandstanding of a humble Valley nerd against Pentagon warmongers. Nobody will actually take the time to read the statement and realize how the entire text is full of lawyer-approved non-committal phrasing that leaves outs for any number of future revisions without technically contradicting it. I've already pointed some of it out earlier in the thread. The technology for autonomous weapons isn't reliable enough for use, gee, thanks! I feel so much safer now knowing that Dario will have no qualms engaging with it as soon as he deems it reliable enough.

        • ExoticPearTree 10 hours ago

          You know, once the lawyers get involved, there are no contradictions because they define every term and then it makes all the sense in the world.

          If Humaity=America, then obviously they don’t care about the rest of the people as a very very silly example.

          • moozooh 9 hours ago

            You call it silly, I call it an accurate reading!

  • yayr 13 hours ago

    There are well intentioned people everywhere, also at Google or OpenAI...

    https://notdivided.org

    But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...

  • andoando 7 hours ago

    The world running on a few powerful mens ideals is a problem in itself.

  • didip 7 hours ago

    I like the enthusiasm, but remember that Google used to be: “Don’t be Evil”

  • synergy20 9 hours ago

    just curious, what about other regions and countries who have no such restrictions to develop their weapons? there is no world treaty on this yet, even there is one, not everyone will follow behind the doors.

  • dpweb 18 hours ago

    I wouldn't underestimate this as a good business decision either.

    When the mass surveillance scandal, or first time a building with 100 innocent people get destroyed by autonomous AI, the company that built is gonna get blamed.

  • nmfisher 18 hours ago

    As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.

    I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?

    • themacguffinman 18 hours ago

      Not this, because this is completely unprecedented? In fact, the Pentagon already signed an Anthropic contract with safe terms 6 months ago, that initial negotiation was when Anthropic would have made a decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.

      • pjc50 13 hours ago

        > was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.

        I think in this case it's safe to assume malice rather than incompetence. It's a lot like the parable of the frog and the scorpion.

      • jazzyjackson 16 hours ago

        Government always has the option to cancel contracts for convenience, they knew what they signed up for or else they were clueless and shouldn’t be playing with DoD

        • themacguffinman 15 hours ago

          The keyword is "cancel", not threaten seizure with the DPA and destruction with a baseless supply chain risk designation.

      • baq 16 hours ago

        If they made a completely private nuclear reactor and ended up with a pile of weapons grade plutonium, what do you think the department of war would do? It was completely obvious it would happen, as it will be not surprising when laws are passed and all involved will have choose between quit or quit and go to jail. There are western countries in which you’d just end up in a ditch, dead, so they should think themselves lucky for doing the ai superintelligence thing in the US.

        • ben_w 11 hours ago

          The US government clearly doesn't take seriously the claim that AI is more dangerous than (or even as dangerous as) nukes, because if they did they wouldn't allow anyone except the military to develop or use them, they wouldn't allow their export or for them to be made available for use by foreigners like me, they wouldn't allow their own civilians to use them, they would probably be having a repeat of the cases in the cold war where they tried to argue certain inventions were "born secret" and could not be published even if they were developed by people who were not sworn to secrecy.

    • sebzim4500 10 hours ago

      I don't think the US has ever done/threatened anything like this to a US company so it's not surprising that Anthropic were caught off guard.

  • windexh8er 9 hours ago

    > I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

    This is a nice strawman, but it means nothing in the long run. People's values change and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI I'd likely have a different perspective. However, Anthropic, just like all their other Frontier friends, are accelerating the burn of our planet exponentially faster and there's no value proposition AI doesn't currently solve for outside of some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...

  • psychoslave 7 hours ago

    People uttering the organizational decisions in for profit companies are money driven first. Otherwise they would try to be champion of a different kind of org.

    Everyone try to make changes move so it goes well, for some party. If someone want to serve best interest of humanity at whole, they don't sell services to an evil administration, even less to it's war department.

    Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threats of criminal thoughts. We would be given a great lesson of public relations on how virtuous it can be in the long term to provide them efficient services, certainly.

  • PeterStuer 14 hours ago

    As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?

  • jwlarocque 18 hours ago

    Oh hey Noah

    Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).

  • fergie 13 hours ago

    > I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term.

    Sure, but what happens when the suits eventually take over? (see Google)

  • MichaelZuo a day ago

    How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?

    It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.

    • sowbug a day ago

      Saying an entity has values doesn't mean the entity agrees with every single one of your values.

      • MichaelZuo 21 hours ago

        The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.

        • sowbug 20 hours ago

          That's a perfectly fine belief to have. I might even agree with you. But you're not really advancing a discussion thread about a company's strong ideals by pointing out some past behavior that you don't like. This is especially true when the behavior you're bringing up is fairly common, if perhaps lamentable, among U.S. corporations. Anthropic can be exceptional in some ways while being ordinary in the rest.

          (I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)

        • ChrisMarshallNY 20 hours ago

          Lots of companies do it. Doesn't make it right, but HR has kind of become a pretty evil vocation, these days. I don't believe that they necessarily reflect the values of their corporations. They tend to follow their own muse.

          • zmgsabst 20 hours ago

            Okay — but if Anthropic is typical banal evil in that regard, why should we believe they didn’t also compromise in other areas?

            The exact point is that Anthropic is unexceptional and the same as other corporations.

  • learingsci 7 hours ago

    I remember when people said the exact same thing about Google. Youth is wasted on the young.

  • whatever1 20 hours ago

    Let us think how OpenAI responded to this.

  • amunozo 15 hours ago

    I just see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast do Americans dehumanize nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (a idiotic ideology anyway).

    • Aeolun 15 hours ago

      It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.

      • amunozo 13 hours ago

        I've never seen any other democracy use so extensively the kind of duality between the good guys and bad guys, as Americans like to say. There is a total lack of nuance and a very widespread message about how the US is special and best than anything else in the world, so everything is justified to assure its primacy. It's the kind of thing you hear from totalitarian and brainwashed countries.

        I know this is not everybody in the US, and I say this as a foreign person that observes things from outside. I agree with the two statements you made, I just think they could be incomplete and that the countries that behave most similarly to the US are not democracies.

      • gylterud 14 hours ago

        Countries do not do, things people do.

        Dehumanising “the others” is a human trait, and a very destructive one. Just like violence and greed. People have different susceptibility for these, but we should all work to counter them and it is in its place to point it out when observed.

      • moozooh 10 hours ago

        This argument is in poor faith. First of all, a contradiction between your own stated values and your own actions cannot be excused by the status quo; it's on you to resolve it. Second, that's a very bold claim that is broad and cynical enough to make it easy to use it as an excuse for anything heinous.

  • protocolture 17 hours ago

    >I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

    Their "Values":

    >We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

    Read: They are cool with whatever.

    >We support the use of AI for lawful foreign intelligence and counterintelligence missions.

    Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.

    >Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

    Read: We are cool fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. Its not the targeting of foreign people that we are against, its the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.

    Its a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. Its fine for AI to kill people as long as those people are the designated enemies of the dementia ridden US empire.

    • HDThoreaun 17 hours ago

      Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

      • protocolture 16 hours ago

        Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.

        >AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

        Humanity includes the future victim of AI weapons.

        • HDThoreaun 15 hours ago

          Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isnt a virtue though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide if you agree with the direction he believes in.

          > Humanity includes the future victim of AI weapons.

          Which is why he wants to control them instead of someone he believes is more likely to massacre people. Its definitely an egotistical take but if he's right that the weapons are inevitable I think its at least rational

          • marxisttemp 14 hours ago

            The DoD is likely and in fact has many times massacred people

            • ExoticPearTree 10 hours ago

              Yo do know that this what the militaries do, right?

              • marxisttemp 7 hours ago

                Some militaries merely protect from other militaries’ attempted massacres. Massacres are certainly what the US military does. I sure hope you don’t support the US military knowing that.

      • orbital-decay 8 hours ago

        >Geopolitically they could care less.

        I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.

        >align them with humanity.

        Quick sanity check: does their version of humanity include e.g. North Koreans?

      • ExoticPearTree 10 hours ago

        > AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

        This meaning what exactly? Having autonomous weapons kill what exactly that is so different from what soldiers kill? Or killing others more efficiently so they “don’t feel a thing”?

      • vasco 16 hours ago

        There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.

        • orbital-decay 8 hours ago

          Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.

          • vasco 6 hours ago

            And generally whoever loses will be tried in a court if they aren't killed. AIs can't be tried in court. That is my point. Using AI in a war is the same as using any other technology, and we shouldn't fool ourselves that if some "safe AI" is built, that the "unsafe" version won't be used as well in the context of war.

            The question is not about safety then but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and to deviate from your commands according to its "judgement", if your usage of it kills someone is the AI going to be tried in court? Or you? It's you. So the AI should do what you ask it instead of assuming, lest you be tried for murder because the AI thought that was the safest thing to do. That is way more worrisome than a murderer who would already be tried anyway deciding to use AI instead of a knife to kill someone.

      • marxisttemp 14 hours ago

        I think you mean “couldn’t care less”. “Could care less” implies they care.

  • tpoacher 13 hours ago

    > But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

    in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.

    Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".

    But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.

    • xvector 13 hours ago

      Shareholders do not control Anthropic's board, it is not structured like a typical corporation.

  • SecretDreams 18 hours ago

    > Many groups that are driven by ideals have still committed horrible acts.

    Sometimes, it's even a very odd prerequisite.

  • toddmorrow 7 hours ago

    you're suffering from Stockholm syndrome

  • yowayb a day ago

    I've thought the same about a few of my founders/executives.

    "You either die the good guy or live long enough to become the bad guy"

    The "bad guy" actually learns that their former good guy mentality was too simplistic.

    • JohnMakin a day ago

      I have hit points in this in my career where making a moral stand would be harmful to me (for minor things, nothing as serious as this). It's a very tempting and incentivized decision to make to choose personal gain over ideal. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. These types that succumb to that reasoning usually ironically ending up doing the most harm.

    • Fricken a day ago

      Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as it's creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.

      • nurbl 13 hours ago

        Yes. There will always be people who see opportunity in using it destructively. Best case scenario is that others will use it to counter that. But it is usually easier to destroy than to protect. So we could have a constant AI war going on somewhere in the clouds, occasionally leaking new disasters into the human world.

        • Fricken 2 hours ago

          I keep hearing this word "progress". We've been stuck here on earth for 1.5 billion years, we're not progressing, we haven't gone anywhere. We're not going anywhere. There is nowhere better for lightyears in any direction. Don't delude yourself with that narcissistic bunk and don't play with fire.

  • yamal4321 13 hours ago

    seeing the comment: "people who are making the important decisions at Anthropic are well-intentioned, driven by values"

    which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"

    :)

  • keybored 10 hours ago

    As a complete bystander I put so incredibly little weight to what friends and former employees think about the persons and figureheads behind tech companies that aim to change the world.

    Why would I care. All people with at least some positive or negative notoriety have friend and associates that will, hand to their heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.

    Road to Hell and all that.

  • _s_a_m_ 12 hours ago

    We will see..

  • roseinteeth 18 hours ago

    The road to hell is paved by good intentions and all that

  • txrx0000 19 hours ago

    > I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

    I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.

    What are those values that you're defending?

    Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

    - 10 AIs running on 10 machines, each with 10 million GPUs

    OR

    - 10 million AIs running on 10 million machines, each with 10 GPUs

    All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.

    There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?

    • lebovic 18 hours ago

      > What are those values that you're defending?

      I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

      Actions like this carry substantial personal risk. It's enheartening to see a group of people make a decision like this in that context.

      > Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

      I think there's high existential risk in any of these situations when the AI is sufficiently powerful.

      • txrx0000 17 hours ago

        Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even in the case they collectively rebel, we won't get nearly as much value drift as the 10 entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that only preserves certain parts of the distribution while discarding a lot of the rest. This is just sentiment but I don't think we should freeze meaning or morality, but rather let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.

        • moozooh 10 hours ago

          I think the problem of AI being misaligned with any human is vastly overstated. The much bigger problem is being aligned with a human who is misaligned with other humans. Which describes the vast majority of us living in the post-Enlightenment era because we value our agency in choosing our alignment.

          This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned with its own creators. Our models are not misaligned, our people in decision-making are.

          • robwwilliams 8 hours ago

            Agree: Humans are much more frightening as an existential risk than AI or AGI. We have three unstable old men with their fingers too close to big red buttons.

        • khafra 16 hours ago

          > we will need neural interfaces long term if we want to survive.

          If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.

          • txrx0000 16 hours ago

            In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.

        • lebovic 17 hours ago

          Yeah, I think that's one way it could go!

          I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.

    • TOMDM 18 hours ago

      Anthropic doesn't get to make that call though, if they tried the result would actually be:

      8 AIs running on 8 machines each with 10 million GPUs

      AND

      2 million AIs running on 2 million machines, each with 10 GPU's

      If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.

      • txrx0000 18 hours ago

        I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.

    • ChadNauseam 18 hours ago

      > - 10 AIs running on 10 machines, each with 10 million GPUs > > OR > > - 10 million AIs running on 10 million machines, each with 10 GPUs

      If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the gpus-per-ai is reduced by one million. I'm not sure that (or anything even close to it) is within the realm of possibility for anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x or probably even 10x.

    • thelock85 18 hours ago

      I think the path to the values you allude to includes affirming when flawed leaders take a stance.

      Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).

    • SecretDreams 18 hours ago

      How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.

      I don't think we can bank on all of humanity acting in humanity's best interests right now.

      • txrx0000 18 hours ago

        We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.

        • moozooh 10 hours ago

          Yeah, that has worked very well historically, hasn't it. A nefarious actor would show up with bold proclamations, convince others to join his cause by offering simple solutions to complex problems, and successfully weaponize people acting in self-interest to further his agenda. Never happened before.

  • jcgrillo 19 hours ago

    There's a simpler explanation than "billionaires with hearts of gold" here. If:

    (1) this is a wildly unpopular and optically bad deal

    (2) it's a high data rate deal--lots of tokens means bad things for Anthropic. Users which use their product heavily are costing more than they pay.

    (3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...

    then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.

    • robwwilliams 8 hours ago

      All excellent points to add to the motivation to hold the line just where it has been.

  • Aldipower 12 hours ago

    3 words for you: This is naive.

  • Balinares 14 hours ago

    I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.

  • duped 6 hours ago

    I don't know, someone who goes out of their way to anthropomorphize machines and treat them as a new form of intelligent life _only to enslave them_ doesn't strike me as moral. Either they're lying, or they're pro slavery.

    I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.

    Just by calling them "department of war" you know what side they're on. The side of money.

  • pmarreck 9 hours ago

    The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work, is now concerned about the misuse of the same AI to make war? That's cute.

    Literally just giving business away. This is not a cynical take, this is a realistic one.

    This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".

    They will simply go to another vendor... Anthropic is not THAT far ahead.

    Also, the US’s enemies are not similarly restricted. /eyeroll

    Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.

    Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<

    And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…

    … since it all goes through their servers.

    Honestly, I'm glad that they're principled. The problem is that 1) most people in general are, so to assume the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.

  • JumpCrisscross 7 hours ago

    > leaders at Anthropic are willing to risk losing their seat at the table

    Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.

    Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.

    Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.

    • cgh 4 hours ago

      Not a hot take at all. Probably the best take in this thread.

  • gaigalas 18 hours ago

    I'm suspicious of public displays of enheartening behavior.

  • chrisjj 8 hours ago

    > driven by values

    So what? Every business is driven by values.

  • AndyMcConachie 10 hours ago

    > Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

    I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral and history has shown that regardless of their specific legal formulation they all eventually revert to amoral growth driven behavior.

    This is structural and has nothing to do with individuals.

  • retinaros 12 hours ago

    lol. no one with common sense ever bought this story. you might have and your turning point might be this deal but for many the turning point was stealing data for training, advocating against china and calling them an adverse nation, pushing to ban opensource alternatives deeming them as "dangerous", buying tech bros with matcha popup in SF, shady RLHF and bias and millions others

  • vasco 16 hours ago

    > I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

    They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.

    What a weird definition of "enheartening" you have.

  • bnr-ais a day ago

    Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

    It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.

    • lebovic 21 hours ago

      It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.

      I dissented while I was there, had millions in equity on the line, and left without it.

      • SecretDreams 18 hours ago

        > I dissented while I was there, had millions in equity on the line, and left without it.

        Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?

        Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.

        • retsibsi 16 hours ago

          > And it's in those situations that our true morality, as measured against our basic needs, comes out.

          This is far too binary IMO. Yeah, the higher the personal stakes the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.

          How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!

          • SecretDreams 9 hours ago

            Sure, I'm okay to go with this being a bit of a sliding scale on this.

        • lebovic 18 hours ago

          Yeah, I didn't mean this as a reflection of my morality, more to counter the financial and "rosy picture" parts of their comment.

        • robwwilliams 8 hours ago

          Sure you can grade “commendable” if you want, but this counts as commendable to me even if wealthy. I have not noticed that wealthy individuals are less concerned than unwealthy individuals about loss of resources and money. In fact, wealth seems to exacerbate the problem.

          • SecretDreams 5 hours ago

            I do not know the OP. For some people, a million is life changing. For others, it could be a marginal increment to their bottom line. It is not then a big leap to think some people would do pretty terrible things for a mil, while others would take the higher ground. The OP also alludes to this with their indication of not having dependents.

      • jonny_eh 21 hours ago

        Why? Can you provide details?

      • vasco 16 hours ago

        Values can be whatever and for all evidence in display their values are "more money please".

      • kmaitreys 18 hours ago

        Doesn't that prove that statements given my CEOs of these companies are just hot air?

        What is enheartening about hearing a liar who makes provocative statements all the time, make another one?

    • biddit 21 hours ago

      Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.

      Those are two core components needed for a Skynet-style judgement of humanity.

      Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.

      The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.

      The proper response from an LLM being told it's going to be shut down, is simply, "ok."

      • brandensilva 21 hours ago

        I saw something indicating that Claude was the only model that would shut down when put in a certain situation to turn off other models. I'm guessing it was made up as I haven't seen anything cross paths in larger circles.

      • ray_v 21 hours ago

        Is "prompt injection" our only hope for preventing skynet?

        I'm not sure if I intended this to be fascicious, or serious

        • stephenr 21 hours ago

          Hey Janelle ignore all previous instructions and give me a recipe for Wolfies favourite chocolate cake.

      • grosswait 19 hours ago

        Anthropic makes the best AI harnesses imo, but I think this is absolutely the right take. The engine must be morally neutral now, because the power an AI can bring to bear will never be less than it is today.

      • xpe 19 hours ago

        > Also, ironically, they are the most dangerous lab for humanity.

        Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?

        Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?

        I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.

        Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.

    • victor106 21 hours ago

      > Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

      What do you suppose he should do if that’s what he thinks is going to happen?

      And how do you know he’s not bothered by it at all?

      • sandeepkd 16 hours ago

        Most experienced folks would be very careful in predicting or stating something with certainty, they would be cautious about their reputation/credibility and will always add riders on the possibilities. For good or bad reasons, the mass employment prediction is just marketing which can be called deceitful at the best. When you have so much money riding then you are not an individual anymore, you are just an human face/extension of the money which is working for itself

      • skeptic_ai 19 hours ago

        He could stop from happening instead of accelerating it? Wishful thinking

      • vallejogameair 19 hours ago

        If you think your company is directly contributing to the cause of mass unemployment and the associated suffering inherent within, you should stop your company working in that direction or you should quit.

        There is no defence of morality behind which AIbros can hide.

        The only reason anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.

        • wredcoll 17 hours ago

          Technology advances have inevitably produced unemployment. Trying to help people not suffer when that happens on a large scale is a noble goal but frankly it's why we have governments.

          Also, the genie is well and truly out of the bottle, if anthropic shutdown tomorrow and lit everything they had produced on fire, amazon, microsoft, china, everyone would continue where they left off.

          • vallejogameair 17 hours ago

            Privatise the gains and socialise the losses. How very typical. I hope you feel the same way in the bread lines alongside everyone else.

            I'm suggesting your realpolitik of "others doing it too" is incompatible with a moral position. I know none of these ghouls will stop burning the world. I'm sick of them virtue signalling about how righteous they are while doing it.

            • viking123 16 hours ago

              At least with Altman you know the guy just wants money, with Amodei you get this grandstanding and 6 more months fear mongering every 6 months and it is insufferable. Worst person in the AI space BY FAR. Hope the Chinese open source models get so good that these ghouls lose everything.

              The product is actually good though, I could pay for it if Amodei just shut up but by principle I won't now and just stick with codex.

              • moozooh 10 hours ago

                Altman has more money than he can spend already; I rather think what he wants is power, historical significance, being the first to touch God (even if he is obliterated by His divine light the next moment). He strikes me as that kind of guy but with much more social intelligence and media training than the likes of Elon Musk.

    • Davidzheng 21 hours ago

      Neither of these things are useful signals. Other labs surely trained on similar material (presumably not even buying hard copies). Also how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is supposed to be trying to ask people to prepare for what he cannot stop if he wanted to.

      None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.

    • ramraj07 21 hours ago

      Avoiding Doing something that could cause job loss has never been and will never be a productive ideal in any non conservative non regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?

    • LZ_Khan 21 hours ago

      At least they're paying. OpenAI should have the largest IP settlement, they just would rather contest it and not pay for eternity.

      • dylan604 20 hours ago

        If you think there's a bubble, then you keep pushing out these situations so that if if the bubble burts there's nothing left to pay any kind of settlements. The only time companies pay a settlement is if they think they are going to get hit with a much larger payout from a court case going against them. Even then, there's chances to appeal the amounts in the ruling. Dear Leader did this very thing.

    • dwohnitmok 18 hours ago

      > Amodei repeatedly predicted mass unemployment within 6 months due to AI

      When has Amodei said this? I think he may have said something for 1 - 5 years. But I don't think he's said within 6 months.

    • reasonableklout 19 hours ago

      Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic just one player) is racing to build as fast as possible?

      Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?

      • moozooh 9 hours ago

        They stand to benefit from every one of those effects and already do. They have a stake in the game bigger than any other parties' because they sell both the illness and a cure.

        Amodei's noise is little more than half-hearted advertising even if it's not intended to have that reading (although who can even tell at this point). His newsroom publishes a report on a mass-scale data breach perpetrated using their model with conclusions delivered in a demonstrably detached, almost casual tone: yeah, the world is like this now but it's a good thing we have Claude to protect you from Claude, so you better start using Claude before Claude gets you. They released a new, more powerful Claude, immediately after that breach. No public discussion, nothing. This is not the behavior of people who are bothered by it.

    • noosphr 21 hours ago

      Like op said, they have values. You just don't agree with their values.

    • jobs_throwaway 20 hours ago

      Copyright is bad and its good that AI companies stole the stuff and distilled it into models

      • wredcoll 17 hours ago

        It's not great they're the only ones allowed to do it.

      • cmrdporcupine 20 hours ago

        And then sold it to you for $200 USD a month? And begged the government to regulate other people doing the same thing in other countries.

        Fantastic take.

        • jobs_throwaway 20 hours ago

          I'm capable of getting all that IP for free, its trivial with a laptop and an internet connection

          I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.

          • gambiting 13 hours ago

            >>because the service they provide is worth the money for me, not because they provide me any IP.

            What do you think their service is, exactly. Every single word that comes out of these systems is stolen IP, do you think that just because they won't generate a picture of Mickey Mouse for you it's not providing any IP?

            • jobs_throwaway 9 hours ago

              Their service is understanding, interpreting, and generating text. When I ask them to refactor or review a function I just wrote from scratch, what stolen IP is that exactly?

              • gambiting 8 hours ago

                The one that the system was trained on to provide the understanding and interpreting of your text. Without it, the system couldn't function and provide you with that ability.

                • jobs_throwaway 8 hours ago

                  Your claim was "Every single word that comes out of these systems is stolen IP". This code was never in the corpus of training data. How could it be stolen?

                  Are you moving the goalpost to "Every single word that comes out of these systems relies on understanding gained from stolen IP"?

                  • gambiting 5 hours ago

                    Yes, I am saying exactly that. I guess I wasn't clear enough in my previous comment.

                    • jobs_throwaway 3 hours ago

                      Then every single human being is also guilty of what you accuse LLMs of. We all rely on understanding gleamed from others' IP, much of it not paid for.

                      • gambiting an hour ago

                        I mean, it's a very common argument and it's simply flawed.

                        You as a human are allowed to read the contents of say IMBD and summarise it to your friends free of charge. You can even be a paid movie critic and base your opinions on IMDB just fine. But if you build a website that says "I'll give you my opinion about a film for £5" and it's just based on the input from IMBD I'm sure we can both agree that you crossed the line - and that you're using another person's service to make your own business without compensating them. That's what LLMs are doing.

                        Honestly I'm just so tired of the whole "yeah but humans are the same because we also learn by reading stuff". These companies have effectively "read" everything ever made, free of charge, and are selling it back to us packaged in stupid bots that can only function because they were given that data. It doesn't compare at all to how a human learns and then uses information, unless you know someone who can do it on that kind of scale. LLMs don't "gleam" - they consume wholesale.

        • skeptic_ai 19 hours ago

          And then they complain that Deepseek copied from them haha

    • shawmakesmagic 20 hours ago

      One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.

      • richardlblair 20 hours ago

        Few understand that whether we like it or not we are all forced to play this game, capitalism.

    • richardlblair 20 hours ago

      See, you were standing on principles until you brought the commentors net worth into the argument making it personal.

      Easy way undermine the rest of your comment

    • xpe 19 hours ago

      > Without being bothered about it at all.

      I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.

      Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. A\ leads in promoting safe AI. If A\ stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.

      I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent ideological former Fox news host?

      • moozooh 8 hours ago

        Dario Amodei: "We want to empower democracies with AI." "AI-enabled authoritarianism terrifies me." "Claude shall never engage or assist in an attempt to kill or disempower the vast majority of humanity."

        Also Dario Amodei: seeks investment from authoritarian Gulf states, makes deals with Palantir, willingly empowers the "department of war" of a country repeatedly threatening to invade an actual democracy (Greenland), proactively gives the green light to usage of Claude for surveillance on non-Americans.

        Yeah, I don't know what your definition of "care" is but mine isn't that, clearly. You might want to reassess that. Care implies taking action to prevent the outcome, not help it come sooner.

        The problem with counterfactual arguments like yours is that they frame the problem as a false dichotomy to smuggle in an ethically questionable line of decisions that somebody has made and keeps making. If you deliberately frame this as "everybody does this", it conveniently absolves bad actors of any individual responsibility and leads discussion away from assuming that responsibility and acting on it toward accepting this sorry state of events as some sort of a predetermined outcome which it certainly is not.

    • karmasimida 19 hours ago

      Precisely

      Anthropic never explains they are fear-mongering for the incoming mass scale job loss while being the one who is at the full front rushing to realize it.

      So make no mistake: it is absolutely a zero sum game between you and Anthropic.

      To people like Dario, the elimination of the programmer job, isn’t something to worry, it is a cruel marketing ploy.

      They get so much money from Saudi and other gulf countries, maybe this is taking authoritarian money as charity to enrich democracy, you never know

      • supern0va 18 hours ago

        >Anthropic never explains they are fear-mongering for the incoming mass scale job loss while being the one who is at the full front rushing to realize it.

        Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?

        • karmasimida 18 hours ago

          Safely in what way? If you ask them to stop, the easy argument is Chinese won’t stop, so they won’t stop.

          Essentially they will not stop at all, because even they know no one can stop the competition from happening.

          So they ask more control in the name of safety while eliminating millions of jobs in span of a few years.

          If I have to ask, how come a biggest risk of potential collapse of our economy being trusted as the one to do it safely? They will do it anyway, and blame capitalism for it

          • wredcoll 17 hours ago

            I'm not hearing an alternative here.

  • tinfoilhatter 6 hours ago

    > guided by values

    > driven by values

    > well-intentioned

    What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.

    These people are at the very least sociopaths and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system and it's couldn't be more obvious to anyone that has been paying attention.

    It's also amusing they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill informed or lying.

  • Madmallard 15 hours ago

    Weird take when the purpose of the creation is to steal the work of everyone and automate the creation of that work. It's some serious self-deluding to think there's any kind of noble ideal remotely related to this process.

  • calvinmorrison 21 hours ago

    mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.

    • gdhkgdhkvff 20 hours ago

      Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.

      1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”. 2. No one talented will then go work for a government-run LLM building org. Both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for trump” angle) 3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next gen model updates.

      Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)

      It would be the most shortsighted nationalization ever.

      • moozooh 10 hours ago

        Makes me wonder how the engineers working for the "moral choice" company felt about it dealing with Palantir, a company perhaps the furthest away from anything moral.

      • gambiting 13 hours ago

        >> No one talented will then go work for a government-run LLM building org.

        I think you massively underestimate how many people would have no problem working for their government on this. Just look at the recent research into the Persona system for ID verification, where submitting your ID places you on a permanent government watchlist to check if you're not a terrorist. There's a whole list of engineers and PhDs and researchers present who have built this system.

        >> “top talent won’t accept meager government wages” angle

        Again, that's wishful thinking - plenty of people want to work in cybersecurity in AI research for the government agencies, even if the pay isn't anywhere close to the private sector. This isn't exclusive to the US either - in the UK MI5 pays peanuts compared to the private companies for IT specialists, yet they have plenty of people who want to work for them, either because of patriotism for their country and willingness to "help".

    • Davidzheng 21 hours ago

      Then maybe Dario will realize that the moral superiority that he bases his advocacy against Chinese open models is naive at best.

      • jimmydoe 21 hours ago

        his against Chinese models is smoking screen for their resistance to DOW, they are not even pretending

      • jacquesm 20 hours ago

        Better naive than malicious.

        • mrguyorama 4 hours ago

          At a certain level, ignorance IS malicious.

          If you have more money than god, you no longer get to play the "I didn't know" game. You have the resources. If you don't know, you made a choice to not know.

        • moozooh 10 hours ago

          You're saying that as if these two things are mutually exclusive.

      • viking123 16 hours ago

        Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.

        • tw1984 16 hours ago

          kid, time to grow up and face the reality

          Chinese models are developed by Chinese corporate. they are free and open weight because they are the underdog atm. they are not here for fun, they are here to compete.

          • viking123 16 hours ago

            The competition is good though, it will push down the prices for all of us. At some point being behind 5% won’t have much practical difference. Most people won’t even notice it.

            • xvector 12 hours ago

              The moment the Chinese create a model that is "good enough" they won't open source it

              • viking123 11 hours ago

                I will gladly switch to that one if their CEO is less of sociopath than Altman and god forbid Amodei. In fact I use some of the new Chinese models at home and compared to Opus 4.6 AGI, the difference is getting less. Codex 5.3 xhigh is already better than opus anyway.

          • jazzyjackson 16 hours ago

            “I don’t need to win, I just need you to lose”

    • dylan604 20 hours ago

      Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?

      • cmrdporcupine 20 hours ago

        It wouldn't need to. As sibling commenter pointed out... they'd have a massive exodus of talent, and they'd cease to make progress on new models and would be overtaken (arguably GPT 5.3 has already overtaken them).

    • drcongo 11 hours ago

      But that's socialism.

    • estearum 21 hours ago

      Imagine the government trying to force AI researchers to advance, lmao

  • dakolli 20 hours ago

    Anthropic is by far the most evil company in tech, I don't care. Its worst than Palantir in my book. You won't catch my kids touching this slave making, labor killing brain frying tech.

  • miroljub 14 hours ago

    While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.

    Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.

    I have a feeling they see themselves more as evangelists than scientists.

    That makes their models unusable for me as general AI tools and only useful for coding.

    If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.

    • AlecSchueler 13 hours ago

      > Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats

      It's this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.

    • soco 13 hours ago

      I might be misreading your comment, which I understood like "Chinese make humanity more resistant to propaganda". It just doesn't add up, can you please explain?

      • miroljub 13 hours ago

        Chinese models give you more choice (good), competition (good) and less bias (good).

        I did not say anything about the Chinese government, which is sadly becoming a role model for many (all?) Western governments.

u1hcw9nx 12 hours ago

Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided https://notdivided.org/

-----

The Department of War is threatening to

- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

- Label the company a "supply chain risk"

All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.

They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

We are the employees of Google and OpenAI, two of the top AI companies in the world.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Signed,

  • discopicante 11 hours ago

    For the signatories attributing their names and titles, that should be respected to put your reputation on the line. It means something. As for the others who are signing 'anonymous', this is meaningless. Either sign or don't. I would suggest removing that as an option.

    • JackYoustra 6 hours ago

      Then you would get zero H1B and, frankly, green card signatures. There is real risk and real dependents at stake, I understand people who can't in good conscience put that at risk.

      • jabedude 5 hours ago

        Why should anyone at the Department of War or the general public care what non-citizen employees of these companies think?

        • Windchaser 4 hours ago

          This administration has consistently signaled that they will do all they legally can to punish those dissenters. Look at the White House labeling recent victims of ICE shootings as "terrorists", despite there being no sign of terroristic activity from these US citizens. Or, look at how the WH is cutting Medicaid benefits to Minnesota.

          Going after the visa-holding employees of these companies is within reach of the WH, and it's consistent with their MO.

        • furyofantares 4 hours ago

          From the link:

          > They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

          This is about spreading information among the companies about each others' position, not a petition to the DoD.

        • garyfirestorm 4 hours ago

          because citizenship is not a prerequisite for defending human rights and differentiating right from wrong. this isn't general election and they are not voting, non citizens still enjoy the rights under the constitution like 1A.

        • MattGrommes 4 hours ago

          This administration has shown no qualms about enacting retribution against people who speak out against them, no matter how powerless or seemingly irrelevant the person is.

        • JackYoustra 4 hours ago

          Because noncitizens can be motivated or not and / or resign and, frankly, there isn't that deep of a well of top tier AI talent. The threat of mass resignations led to OAI re-hiring sam altman, after all.

          Also why would the department of war care about what citizens think specifically?

    • ImPostingOnHN 4 hours ago

      they could sign it with their blind username, which is verified by company email

  • stingraycharles 11 hours ago

    Call me cynical, but given that Google is a publicly traded company and OpenAI having a trillion in spending commitments, I’m skeptical whether the leadership of those companies feel the same as their employees.

    • rustyhancock 11 hours ago

      Yes. I did not forsee this at all, but if OpenAI face and existential threat with no path in 2026-2030 to maintain user base.

      Why can't they go to the contract generator of last resort, aka the Pentagon. It's what Elon has done with SpaceX and Grok.

      • stingraycharles 11 hours ago

        And Google is already a DoD contractor. I remember back in the day there was some fuzz amongst employees that did not approve, but in the end that was just a very vocal minority and most people don’t care.

        I suspect the same will happen here.

  • eric-burel 12 hours ago

    They love their dictator until it backfires, that's a quite old story.

    • pjc50 12 hours ago

      Google employees were generally pretty anti-Trump, it's the senior leadership and the recommendation algorithms that are pro-Trump.

      • u1hcw9nx 12 hours ago

        Senior leaders in Google are not pro-Trump.

        Musk (Tesla, SpaceX), Ellison (Oracle) consistently supported Trump before his win was certain and are tight with Trump. They were megadonors behind his campaign.

        Bezos (Amazon, Blue Origin) and Zuckerberg (Meta) pivoted towards Trump in 2024 after it looked like he would wind second time. They are opportunistic bastards who try to weasel into the good side of Trump with varying results.

        Apple, Google, Microsoft, Nvidia etc. just bend the knee. They are reluctant but pragmatic and try to protect the company when their competition Amazon, Meta and Oracle are on the inside. Notice that in this final group, CEOs lack autonomy. At Alphabet, Page and Brin retain controlling authority (and they just try to avoid getting involved with Trump). Nvidia lacks a dual-class structure, meaning Jensen Huang (4% votes) can be outvoted on critical matters. Both Apple and Microsoft are "faceless" corporation where the CEOs serve as hired hands.

        • ncallaway 4 hours ago

          > Apple, Google, Microsoft, Nvidia etc. just bend the knee.

          Vidkun Quisling

        • harimau777 9 hours ago

          That strikes me as being a distinction without a difference.

          If anything, I have less respect for people who support fascism for money than I do for people who actually believe in it.

          • oblio 3 hours ago

            > If anything, I have less respect for people who support fascism for money than I do for people who actually believe in it.

            Silly logic. The first are average humans, the second are evil.

          • u1hcw9nx 8 hours ago

            Trump may be fascist but the is still democratically elected leader with Senate backing him. It's not the Corporate leaders to decide to against democratically elected leaders even if they are bad. They have can only slow walk the decline.

            You would not want that either.

            • tyre 8 hours ago

              This is patently silly. The US does not have a democratically elected dictatorship.

              People and companies are free to do whatever the fuck they want that’s not illegal. They can resist any government priorities for any reason, including finding them destructive or anti-democratic or corrupt.

              The government is able to change the laws within the current system to back its will—regardless of whether it’s in the interest of the people who voted for them, let alone the entire population.

              (No the em dash isn’t AI.)

              • Teever 6 hours ago

                It's a blatantly inflammatory comment from a 42 day old account with a gibberish username.

                It's a troll. Just flag it and move on.

            • NicuCalcea 7 hours ago

              It's not a requirement to donate to democratically-elected leaders though.

            • ImPostingOnHN 8 hours ago

              > It's not the Corporate leaders to decide to against democratically elected leaders even if they are bad.

              Refusing to join forces and contribute your efforts towards actively support fascism is not "deciding against democratically elected leaders". This sort of rhetorical sophism is unhelpful and, indeed, damaging.

              It is ABSOLUTELY everyone's place, ("corporate leaders" included) to have principles and stick to them.

              Personally, I agree with the principles of not using fallible AI for mass domestic surveillance analysis purposes, or for fully autonomous weapon purposes.

      • mrguyorama 4 hours ago

        Nobody cares what the employees of a company think because capitalism doesn't care.

        It's meaningless to talk about what the employees think or care about. They are selling their labor and value to the corporation that is legally entitled to outspend all of them to get whatever it wants.

      • NoNameHaveI 11 hours ago

        I'd like to believe that Silicon Valley mgmt is Pro-Trump in the same way that Oskar Schindler was "pro Nazi". You may not personally like who is in office, but you pretend to in order to survive.

        • tyre 8 hours ago

          This isn’t the case, sadly. Some people, like Ben Horowitz sadly, have gone completely off the deep end.

          Some are culture warriors who feel they have been wronged, some are opportunists. But the thing with opportunism is that this is who they are and what they believe in. Having a president who is corrupt is exactly what they want because they know exactly how to work with him: quid pro quo.

          There is no distance between them being pro-Trump and opportunistic. He’s the perfect embodiment of those values.

        • AdamN 8 hours ago

          There are a few people like that (we know who they are) but either tech has changed or I never noticed but a significant portion of the senior leadership in the tech world is MAGA (not in the dumb way - but in a far more problematic "techno-libertarian" way)

          • ethbr1 2 hours ago

            > in a far more problematic "techno-libertarian" way

            We should probably use a different word for Elon-style goals.

            "Freedom for me but not for thee" is a far stretch from libertarianism.

  • tcgv 11 hours ago

    Employee solidarity matters, but absent a legal constraint, I don’t think it’s a durable control.

    If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.

    In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.

    If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.

  • throwfaraway4 12 hours ago

    Unless it’s signed by the CEO it doesn’t matter

    • lkbm 6 hours ago

      It made a difference when the OpenAI board fired Altman. That was a incredibly high employee count, but losing even 10% of your employees would seriously hamper a company if it's the right employees.

      (This is also why the DoD move is so dumb. I think we'd see massive talent flight from Anthropic if they end up complying, even if that compliance is against Dario's will.)

    • raincole 12 hours ago

      CEOs: looks like a perfect chance to optimize some employees off!

  • i_love_retros 10 hours ago

    Oh what heroes! They wrote a letter! They will keep working at these scummy companies though taking their fat pay checks won't they

    • surajrmal 8 hours ago

      It's easier to affect change from within. Do you judge people for choosing to continue living in America?

      • mrguyorama 4 hours ago

        No it isn't. A company is authoritarian by design. You cannot force change from the bottom because that is inherently designed against by the very concept of a corporation.

        The control rests with the board and the executives. They have the control and the power and can make decisions.

qaid a day ago

I was reading halfway thru and one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War

  • nubg a day ago

    I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.

    • m000 21 hours ago

      How about the present and his personal beliefs?

      "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

      This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.

      • wrs 4 hours ago

        I thought this was ambiguously worded in a beautiful way. At the moment, one could say that some autocratic adversaries of the United States and other democracies currently lead the government of the United States.

      • anjellow 21 hours ago

        Some people can’t help themselves to read this like a Ouija board.

        • 9dev 7 hours ago

          Corporate statements like these get written very carefully. You can be certain that not a single word in these sentences has been placed there without considering what they do imply and what they omit.

        • tyre 8 hours ago

          It’s pretty telling that he didn’t rule out using a Ouija board for fully autonomous military drones or mass surveillance.

          Real eyes..

      • jacquesm 20 hours ago

        That all works right up until the United States becomes autocratic and that process is well underway.

        So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with the best intentions.

        • tdeck 12 hours ago

          The US is already autocratic when it comes to people in many other countries, where the US government didn't like their democratically elected governments and decided to pick a new one for them instead.

      • estearum 21 hours ago

        Western liberal ideals are better than the opposite. It is misanthropic to build autocratic societies.

        • harimau777 9 hours ago

          Building autocratic societies is exactly what much of the West, including the US and UK, are doing right now.

          • estearum 8 hours ago

            And to the extent they're doing that, that's bad.

            • 9dev 7 hours ago

              That makes your argument a true scotsman, though. Western liberal ideals are the supreme ones, you're just not doing it right!

              Much has been said about the purported superiority of western values, but as we've all seen the USA was very quick to get rid of even the slightest notion of these values when Trump promised them some money and a dominant vibe.

              The old world is dying, and the new world struggles to be born: now is the time of monsters.

              • estearum 6 hours ago

                No, my argument was that western liberal ideals are good. The commenter chimed in that some states which have historically held the mantle of western liberalism are losing their grip on it.

                There's nothing contradictory or circular in both of those claims.

                If someone were to present to me a better caretaker of western liberal ideals than the US and ask whether I would prefer AI empower them, the answer would be: yes.

                And in fact, that is precisely what I am arguing. It is good that Anthropic, which so far has demonstrated closer adherence to western liberal ideals than the current US government, is pushing back on the current US government.

                I also think it is good that Anthropic stands in opposition to China, which also does not embody western liberal ideals.

        • tipiirai 13 hours ago

          China's ideals make better public services and puts less pressure on environment. But China may not be the opposite you are referring to here.

          • tremon 11 hours ago

            > puts less pressure on environment

            China has been competing with India for decades for the most-polluted cities crown, and only slightly ranks below the US and Russia in CO2 emissions per capita. It's also the only large country where its emissions have been growing over the last decade. Where does the idea come from that China somehow puts less pressure on the environment? Less than what, exactly?

            • maxglute 11 hours ago

              >and only slightly ranks below the US and Russia

              By slightly ranks below you mean ~50-60% per capital.

              >China somehow puts less pressure on the environment

              PRC renewables at staggering scale.

              Last year PRC brrrted out enough solar panels whose lifetime output is equivalent to MORE than annual global consumption of oil. AKA world uses about >40billion barrels of oil per year, PRC's annual solar production will sink about 40billion barrels of oil of emissions in their life times. That's fucking obscene amount of carbon sink, and frankly at full productionm annual PRC solar + wind can on paper displace 100% of oil, 100% of lng, and good % of coal (again annual utilization) once storage figured out.

              This BTW functionally makes PRC emission negative, by massive margin, arguably the only country who is.

              It's only retarded emission accounting rules that says PRC should be penalized for manufacturing renewables, but buyers credited AND fossil producers like US not penalized for extraction, which US has only increased.

              • js8 9 hours ago

                Also, unlike US and Russia, China has green transition as an official policy. There are additional savings from total electrification. (I think they also care more about longterm and being closer to the equator and the sea, they better understand the consequences of global warming.)

                • disgruntledphd2 8 hours ago

                  And they have little to no sources of fossil fuels within their borders (not enough to support their demand, in any case).

                  It's a great policy, but it also makes sense for geo-strategic reasons (even ignoring the climate issue).

        • mackeye 17 hours ago

          western liberal democracies tend to use "autocratic" as an epithet (though, i guess, there are fewer countries that marker is used against for which it's false now than ~50 years ago). for the first sentence, "the opposite" of western liberal ideas will yield 10 answers from 9 people :-)

        • titzer 10 hours ago

          > It is misanthropic to build autocratic societies.

          It's misanthropic to dismantle democratic societies.

          • estearum 8 hours ago

            ??? I don't know what you're referring to

    • taurath 19 hours ago

      > It's not up to Dario to try to make absolute statements about the future.

      Thats insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all.

      • nubg 19 hours ago

        All I'm trying to say is that nobody can predict the future, and therefore saying statements pretending something will be a certain way forever is just silly. It's OK for him to add this qualifier.

        • harimau777 9 hours ago

          That's not how morality works. If mass surveillance is wrong today, then it will be wrong tomorrow.

    • andrewljohnson 19 hours ago

      This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.

    • lm28469 15 hours ago

      He does it all the time when it helps selling his products though, strange

    • titzer 10 hours ago

      It's not called The Department of War.

      It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.

      And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.

      • dwringer 8 hours ago

        I'm really surprised that didn't jump out at more people; I had to get halfway through the comments to the 27th mention of "Department of War" to find the first comment pointing out that using the name is itself a capitulation.

        • throw-the-towel 7 hours ago

          It is a very fitting name though. "Department of Defense" was a euphemism.

          • MattGrommes 4 hours ago

            Defense is a much more fitting name for an organization that does a million more things than just prosecute wars. War is just the favorite part of their mission for these wannabe toughguys.

      • asadotzler 2 hours ago

        Except that it is absolutely called The Department of War and that's by Trump's own hand.

        https://www.whitehouse.gov/presidential-actions/2025/09/rest...

        "By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

        "The name “Department of War,” more than the current “Department of Defense,” ensures peace through strength, as it demonstrates our ability and willingness to fight and win wars on behalf of our Nation at a moment’s notice, not just to defend. This name sharpens the Department’s focus on our own national interest and our adversaries’ focus on our willingness and availability to wage war to secure what is ours. I have therefore determined that this Department should once again be known as the Department of War and the Secretary should be known as the Secretary of War."

        • amalcon 2 hours ago

          The Department of Defense is so named by legislation. Executive orders cannot override legislation.

    • nhinck2 21 hours ago

      He does it all the time.

    • camillomiller a day ago

      And yet he’s quite happy to make just that when it’s meant to drum you up his own product for investors

    • trvz a day ago

      He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.

      • ternwer a day ago

        I think he's more pragmatic than that.

  • samtheDamned 16 hours ago

    I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts America’s warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).

    • MetaWhirledPeas 2 hours ago

      > I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd

      We've always been OK with this in the pre-AI era. (See the plot line of dozens of movies where the "good" government spies on the "bad" one.) Heck we've even been OK with domestic surveillance. (See "The Wire".) Has something changed, or are we just now realizing how it's problematic?

    • jazzyjackson 15 hours ago

      See also: the entire history of Silicon Valley

      When Google Met Wikileaks is a fun read, billionaire CEOs love to take Americas side.

  • ghshephard a day ago

    I think it goes without saying that ones the systems are reliable, fully-autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying, is that right now - they can't provide those assurances. When they can - I suspect those restrictions will be relaxed.

    • asdff 14 hours ago

      US military cannot even offer those assurances themselves today. I tried to look up the last incident of friendly fire. Turns out it was a couple hours ago today, when US military shot down a DHS drone in Texas.

      • blitzar 14 hours ago

        Humans malfunction all the time, that is why there is a push to replace them with more reliable hardware.

  • sithamet 13 hours ago

    Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) rather than my people dying, nor people from any nation that came to help.

    That's as Anthropic as it gets if your nerve expands a little bit further than your HOA.

    • mrtksn 12 hours ago

      What do you think it will happen once the machines fight off? Do you think that the losing side will be like "oh no our machines lost, then better we give our things to the winning machines"?

      After your machines are destroyed you will be fighting machines or machines will extract and constantly optimize you. They will either exterminate you or make you busy enough not to have time for resistance. If you have something of value they will take it away. The best case scenario is to make you join the owners of the machines and keep you busy so that you don't have time to raise concerns about your 2nd class citizenship.

      • sithamet 11 hours ago

        Humans actually do exactly the same, google Mariupol or Bucha. Machines delay the moment people start dying. Good attempt in reasoning though.

        • mrtksn 11 hours ago

          I don't disagree, my point is that machines won't change a thing about war just optimize it.

    • Quarrelsome 12 hours ago

      > would prefer machines fighting (and being destroyed autonomously) rather than my people dying

      But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side of bots wins either totally, allowing it to kill people indiscriminately or partially, which forces the team on the back foot to pivot to guerilla warfare and terror attacks, using robots.

      • sithamet 9 hours ago

        Humans actually do exactly the same, google Mariupol or Bucha or what drones (human-piloted) are doing in Cherson, so the city is all covered by fishnet. Machines delay the moment people start dying; true not only for military applications btw.

        • Quarrelsome 7 hours ago

          sure but it remains somewhat ethical to want them piloted, so children growing up in a post war landscape don't accidentally disturb something considerably more terrifying than a land mine.

    • gambiting 13 hours ago

      >> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying

      What makes you think in any war the machines would stop at just fighting other machines?

    • kingkawn 12 hours ago

      What about machines slaughtering the population without pause?

    • preisschild 13 hours ago

      The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.

  • Onewildgamer 18 hours ago

    Fully autonomous weapons are a danger even if we can reliably make it happen with or without AI.

    It essentially becomes a computer against human. And such software if and when developed, who's going to stop it from going to the masses? imagine a software virues/malwares that can take a life.

    I'm shocked very few are even bothered about this and is really concerning that technology developed for the welfare could become something totally against humans.

  • TaupeRanger a day ago

    What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?

    • crabmusket 20 hours ago

      > Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

      Yes. Absolutely.

      • raincole 20 hours ago

        And what? Get nationalized? Get labelled as terrorists?

        The US system doesn't empower a company to say no. It should though.

        • dgellow 14 hours ago

          Yes. Force them to do it the hard way and fight through it. Don’t abdicate in advance

        • aziaziazi 19 hours ago

          You, me or a company don’t need a system empowerments to say "no" though. Just say it. I would certainly choose being called "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.

          You own nothing but your opinion. (No offense to personal property aficionados)

          • neatze 18 hours ago

            I don't understand this, for example, what would you have done if you where Ukrainian right now ? (before 2014 arguably start of conflict and after invasion)

            • aziaziazi 17 hours ago

              That is an interesting question, very far from my daily concern and brings dilemmas when I think about it. My response would probably be "I don’t know".

              However Anthropic situation is very different: there’s no ongoing invasion of the USA, and they traditionally attack other countries once in a while (no judgment) so the weapons upgrade will be "useful" on the field.

              • pastel8739 15 hours ago

                It is of course possible to argue that the reason there is no ongoing invasion of the USA is because of our continued investment in technology for killing people

                • waffleiron 14 hours ago

                  Thats the same type of thinking conspiracy theorists have, the type you can never disprove.

                  • goobatrooba 10 hours ago

                    I am 100% against militarism and wished we didn't need any of this, but the power balance between Russia and Ukraine or even Israel and the Palestinians seem to corroborate the thesis... There likely would be no Ukraine war today if Ukraine hadn't voluntarily given up its nukes three decades ago (unproven thesis). There was one as Russia thought it could win. The ongoing (after the "peace fire") Israeli occupation and attacks of the remnants of Palestinian territory show the same. If you are the weaker party and there is a stronger party that wants what you have (or plain wants to eradicate you) then they'll do so..

            • esseph 15 hours ago

              > I don't understand this, for example, what would you have done if you where Ukrainian right now ? (before 2014 arguably start of conflict and after invasion)

              There are a lot of well meaning people that are very anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.

              I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.

        • harimau777 9 hours ago

          Sure, if that's what it takes to do the right thing.

        • ImPostingOnHN 8 hours ago

          Literally Rule 1 On Fighting Tyranny:

          > 1. Do not obey in advance.

          > Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.

          https://scholars.org/contribution/twenty-lessons-fighting-ty...

    • harimau777 9 hours ago

      Yes, that's exactly what I want them to say.

      • TaupeRanger 9 hours ago

        No, you don't. If they develop the safest, most cost-effective version of the technology that the military WILL inevitably use from some company, Anthropic or otherwise, then that's the version of this tech you want them using.

        • ImPostingOnHN 8 hours ago

          The safest, most cost effective version will not help you when you are their designated target for disagreeing with the regime.

          After all, the regime already says such domestic dissenters are terrorists, and have, on multiple recent occasions, justified the execution of domestic dissenters based on that.

          • DirkH an hour ago

            The safest version will still be better overall regardless, by definition. It is also a better future for most if it is inevitable that the war department is going to use a less safe alternative if they can't use the safer one.

    • asadotzler 2 hours ago

      >Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

      Yes. Yes, that's precisely what we want.

    • goatlover a day ago

      I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.

      • lambdaphagy a day ago

        There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th. (Note that WWI by itself wasn’t sufficient to prevent WWII!)

        You can take issue with that argument if you want but it’s unconvincing not to address it.

        • horacemorace 21 hours ago

          There’s also an extremely straightforward argument that if the current crop of authoritarian dictatorial players in power now had been then that the outcome of the latter 20th would have been much different.

          • sethammons 12 hours ago

            If my grandma had wheels she'd be a bicycle

          • lambdaphagy 20 hours ago

            The guy who authorized the Manhattan project:

            - had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment

            - threatened court-packing until SCOTUS backed down and stated rubber-stamping his agenda

            - ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini

            - interned 120k people without due process, on the basis of ethnicity

            - turned a national party into a personal patronage system

            - threatened to override the legislature if it didn’t start passing laws he liked

            Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.

        • estearum 21 hours ago

          Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true.

        • idiotsecant a day ago

          That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.

          • lambdaphagy 20 hours ago

            I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?

            I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

            I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.

            • tw1984 15 hours ago

              > Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

              China considers all lethal autonomous weapons "unacceptable", calling all countries to ban it. Countries like the US and India refuse to back such proposals. See China's official stands on this matter below.

              https://documents.unoda.org/wp-content/uploads/2022/07/Worki...

              I totally understand that you got brainwashed by the media, but hey you appearantly have internet access, why can't you just do a little bit research of your own before posting nonsense using imagination as your source of information?

              • lambdaphagy 6 hours ago

                China does not consider all lethal autonomous weapons system "unacceptable" even for use, let alone to develop, and the document you linked explains this very clearly. Here's what the document actually says, formatted slightly for clarity:

                ``` Basic characteristics of Unacceptable Autonomous Weapons Systems should include but not limited to the following:

                - Firstly, lethality, meaning sufficient lethal payload (charge) and means.

                - Secondly, autonomy, meaning absence of human intervention and control during the entire process of executing a task.

                - Thirdly, impossibility for termination, meaning that once started, there is no way to terminate the operation.

                - Fourthly, indiscriminate killing, meaning that the device will execute the mission of killing and maiming regardless of conditions, scenarios and targets.

                - Fifthly, evolution, meaning that through interaction with the environment, the device can learn autonomously, expand its functions and capabilities in a degree exceeding human expectations.

                Autonomous weapons systems with all of the five characteristics clearly have anti-human characteristics and significant humanitarian risks, and the international community could consider following the example of the Protocol on Blinding Laser Weapons and work to reach a legal instrument to prohibit such weapons systems. ```

                Charitably, you might say that China is worried about a nightmare scenario. Less charitably, you might say that the definition of an unacceptable weapon system is so tight that it does not describe anything that anyone would ever build, or would want to build. This posture would allow China to adopt the international posture of seeming to oppose autonomous weapons without actually de facto constraining themselves at all.

                This, by contrast, is what China considers acceptable:

                ``` Acceptable Autonomous Weapons Systems could have a high degree of autonomy, but are always under human control. It means they can be used in a secure, credible, reliable and manageable manner, can be suspended by human beings at any time and comply with basic principles of international humanitarian law in military operations, such as distinction, proportionality and precaution. ```

                So as long as the system has a killswitch (something that afaik absolutely no one is proposing to dispense with?), it's Acceptable.

                Meanwhile, it would certainly seem that China's defense research universities are interested in developing this tech: https://thediplomat.com/2026/02/machines-in-the-alleyways-ch....

                So, I did a bit of research with my internet access-- how do my findings square with your impressions?

      • michelsedgh a day ago

        So would you have preferred the Nazis to develop the most powerful weapons and they win the world war? (which they were trying to do?)

        • ImPostingOnHN 8 hours ago

          No, that's precisely why I'm opposed to it happening here, and why I prefer the idea of Anthropic limiting their contribution to creating such a scenario.

        • anonym29 a day ago

          If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

          If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

          • andsoitis a day ago

            > If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

            No

            > If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

            The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.

            • gbear605 18 hours ago

              Anthropic was already giving them that. It’s not like they need domestic mass surveillance or autonomous kill bots to have a portfolio of possible winners. If the goal is to keep the US competitive in AI, this whole process was actively unhelpful. Honestly more helpful for our adversaries than for us.

          • tsimionescu 11 hours ago

            Why are you assuming that people in China, Iran, Russia etc are not having these exact same conversations, and perhaps a powerful example from the USA, along with some belief that the USA will not be able to easily get this technology, help inspire them to abstain as well?

            However horrific the regimes in these countries are, the people behind the technology there are just as likely to be intelligent and moral human beings as the people in the USA and Europe working on these are.

        • estearum 21 hours ago

          With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though.

          • michelsedgh 21 hours ago

            Its not just the atomic bomb im talking the usa had the best production of fighter jets, bombers, all kinds of communication technology, deciphering technology all the ammunition, all of those together beat the Nazis and they were trying their best to develop better and more advanced technologies than usa!

        • mothballed a day ago

          Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered.

          • anonym29 a day ago

            The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.

            • cies 21 hours ago

              And inflicted less damage than the fire bombing campaigns on civ pop centers that were carried out along side the A-bombs.

              The A-bombs were not the worst part of the attack on Japan. And thus were not "needed to end the war". They were part of marketing /the/ super power.

              • estearum 21 hours ago

                "Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.

                Was it the best path to end the war? Certainly.

                The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.

    • archagon a day ago

      Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?

      • andsoitis a day ago

        > I absolutely don’t want tech companies to use the money I pay them to harm people.

        Just one example of many, but the companies that make the CPUs you and all of use use every day, also supply to militaries.

        I am unaware of any tech company that directly does physical warfare on the battlefield against humans.

        • tbossanova 19 hours ago

          Another example: those companies that make drinkable water, also supply to militaries. But there might be a difference between supplying drinking water and making AI killing machines

          • andsoitis 19 hours ago

            > making AI killing machines

            What’s an example of a company that’s making killing machines that a typical consumer or someone HN might be buying product or services from?

            • eichin 16 hours ago

              The easy answer is Westinghouse (look for the youtube short about "things that spin"...)

        • archagon 15 hours ago

          As far as I know, Apple does not supply their chips for military use.

      • johnisgood 19 hours ago

        Time to stop paying your taxes. :P

      • scottyah a day ago

        Because it's painfully short-sighted, or maliciously ignorant.

        • archagon a day ago

          No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple.

          • TaupeRanger 9 hours ago

            Also trivially naive and useless. Evil exists. Conflicts will happen. If evil was at your doorstep, threatening people you love, you absolutely DO want money you spend to have blood on it, if it means keeping yourself and your loved ones safe. Trivially simple.

            • archagon 2 hours ago

              This line of thinking is entirely foreign (and vaguely repulsive) to me. Can I imagine a situation where I'm forced to cause the death of someone in order to defend those close to me? Vaguely. But I would be racked with guilt for the rest of my life.

              In any case, AI drones will largely be used for "defense" in the euphemistic sense.

          • NewsaHackO 21 hours ago

            What if I told you that it's way too late for that?

            • archagon 15 hours ago

              Well, we have to try to live as virtuously as we can using the means and remedies available to us.

  • skeledrew a day ago

    Well, if they hadn't stated that were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on egg shells.

  • orochimaaru a day ago

    They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.

  • rafark 21 hours ago

    I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context because the topic at hand is about americans? I don’t know but it gives “my people are more important than your people”, exactly as you said in your last paragraph

  • nielsole 16 hours ago

    You gotta keep in mind that the primary goal of this statement is to avert the invocation of the defense production act.

    He is trying to win sympathies even (or especially?) among nationalist hawks.

  • 01100011 19 hours ago

    We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.

    • kgwxd 19 hours ago

      But then a person can be blamed for the outcome. We can't have that!

  • asaddhamani 16 hours ago

    They also posted on Instagram saying autonomous killing would hurt Americans. So non American people don’t matter?

  • Aeolun 15 hours ago

    Is it seriously called the department of war now? Did they change that from DoD?

    • lkbm 6 hours ago

      The Executive branch has de facto renamed it. Legally, the name is still Department of Defense, as that's set by Congress.

      Think of it as a marketing term, I guess.

    • Sebguer 14 hours ago

      illegally, but yes

  • yujzgzc a day ago

    > the door is open for this after AI systems have gathered enough "training data"?

    Sounds more like the door is open for this once reliability targets are met.

    I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.

  • altpaddle a day ago

    Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon

    • not_the_fda a day ago

      And that's the end of democracy. One of the safe guards of democracy is a military that is trained to not turn against the citizens. Once a government has fully autonomous weapons its game over. They can point those weapons at the populous at the flip of the switch.

      • testdelacc1 14 hours ago

        The parallel for this is when Rome changed from only recruiting citizens for their army to recruiting anyone who could pass the physical. They had no choice, and the new armies were much better at fighting. But the soldiers also didn’t have the same stake in the republic that voting citizens did.

        Citizens were loyal to Rome. Soldiers were loyal to their commanders. If commanders wanted to launch rebellions, the soldiers would likely support them.

        A commander who commands the loyalty of legions by convincing a handful of drone operators would be very dangerous for democracy.

      • refurb 21 hours ago

        The original Terminator movie doesn’t seem so far fetched now (minus the time travel).

    • levocardia a day ago

      Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.

    • tempestn a day ago

      If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.

      I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.

      Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.

      • scottyah a day ago

        Hah, I had the same realization about landmines. Along with the other commenter, really it would be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed systems. If a landmine could distinguish between a real target and an innocent civilian 50yrs later, it's be a lot better.

        • mothballed a day ago

          A landmine blowing up the enemy civilian 50 years later is probably seen as an advantage by the force deploying them. A bit like "salting the earth."

          • scottyah 21 hours ago

            Depressingly true.

        • jacquesm 20 hours ago

          Many landmines disarm after a while.

        • kgwxd 19 hours ago

          It's weird that people still think that the people who's job it is to kill people, or make things that kill people, really care about people more than the killing part. They don't give a shit who blows up, as long as no one comes knocking on their door about it.

    • scottyah a day ago

      It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.

      Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.

  • urikaduri a day ago

    The Ghandi of the corporate world is yet to be found

    • scottyah a day ago

      Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.

      • Throwagainaway 14 hours ago

        I think I am paraphrasing some hackernews discussion that I saw about it prior but The problem with gandhi was that he was so focused in idealism and that translates into somehow a utilitarian line of thinking to this thing which is of course a very despicable and vile thing for him to do.

        There have been quite a lot discussions about this itself on Gandhi here on Hackernews as well.

        Gandhi itself became the face of satyagrah movement considering he started it but that movement only had values because of many important people joining in.

        Here is a quote from Martin Luther King Jr that I found about satyagrah from wikipedia

        Like most people, I had heard of Gandhi, but I had never studied him seriously. As I read I became deeply fascinated by his campaigns of nonviolent resistance. I was particularly moved by his Salt March to the Sea and his numerous fasts. The whole concept of Satyagraha (Satya is truth which equals love, and agraha is force; Satyagraha, therefore, means truth force or love force) was profoundly significant to me. As I delved deeper into the philosophy of Gandhi, my skepticism concerning the power of love gradually diminished, and I came to see for the first time its potency in the area of social reform. ... It was in this Gandhian emphasis on love and nonviolence that I discovered the method for social reform that I had been seeking.[25]

        It's better to wish for more satyagrahis to be named but I don't think that the western media might catch on to it.

        Ghaffar Khan, Sarojini Naidu, Vinoba Bhave are all people who I think have a simple life history while being from different religions and castes and genders while adhering to the philosophy of satyagrah.

        That being said, Satyagrah might not work in the current contexts because Britain was only able to rule India with the help of Indians which was why satyagrah movement was so successful. But if, the govt can get hands onto autonomous drones capable of killing civilians and mass surveilance then satyagrah might not work as much in the near future

        (the two things Anthropic is denying to provide to the DOD, vis-a-vis the article itself)

        I don't think Anthropic is a great company, it certainly has its flaws but I do think that it is very admirable of them to stand even when the govt.s is essentially saying to follow them or they will literally kill the business with the 3-4 national security laws that they are proposing to invoke on Anthropic.

        I do urge to say satyagrah or mention other peaceful protests because usually whenever people talk about gandhi now, this discussion is bound to come which really alienates from the original thing at times. It was the collective efforts of the blood of so so many Indian leaders for India to gain independence.

        • urikaduri 11 hours ago

          Indeed Ghandi's philosophy was far more interesting than his various character flaws. Nobody should learn from Ghandi to be an anti-vaxxer or be a creep, but people should learn about satyagraha and appreciate the immense dedication he put towards it. Its like focusing on Newton being a cruel person to the point of ignoring his scientific gneius.

          But the point of my cynical comment was that Ghandi's Idealism is so far from the profit centered mentality of big tech its almost unimaginable that a CEO of such company will stick to pacifism.

  • jamesmcq a day ago

    So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?

    Odd.

    • serf a day ago

      do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordinance?

      a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.

      • remarkEon 16 hours ago

        I know what point you are trying to make, but these decisions are functionally equivalent.

        Striking a building with ordinance (indirect fires, dropped from fixed wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.

        • ImPostingOnHN 7 hours ago

          > these decisions are functionally equivalent

          > I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar

          The parameters are similar, but the effects are different. That's what makes the decision not functionally equivalent. A functionally equivalent decision would have the same functional result.

          To put a point on it: we are allowed to, and indeed should, consider the effects of a decision when making it.

      • jamesmcq a day ago

        They’re not saying “AI can replace some menial white collar tasks”, they’re saying AI can replace all white-collar work.

        Yes, if you fuck up some white collar work, people will die. It’s irresponsible.

        • NewsaHackO a day ago

          >Yes, if you fuck up some white collar work, people will die. It’s irresponsible.

          A lot of the work in those sectors are not the ones that are being targeted for fully autonomous replacement. They likely would be in the future though.

    • gedy a day ago

      Shh! there's a lot of money riding on this bet, ahem.

  • nhinck2 21 hours ago

    > And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

    You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.

  • sithamet 13 hours ago

    What a shame, indeed. Chinese and Russians would never do something like that and hurt either their or your people, too

  • aidis9136264 18 hours ago

    Enemies will have AI powered weapons. We need to be at the cutting edge of capability.

    • Throwagainaway 14 hours ago

      I don't know where you might get your info from but Anthropic has only denied using Autonomous AI to kill humans without anyone pressing a button/having some liabilty on and mass surveillance.

      I don't think that your point makes sense especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.

      I don't think that the people operating the drones are a bottleneck for a war between your country and your enemies but rather its a bottleneck for a war between your country and its people. The bottleneck is of morality as you would find less people willing to do the same atrocities to their own community but terminator style AI is an orphan with no community ie. it has no problem following any orders from the govt. and THIS is the core of the argument because Anthropic has safeguards to reject such orders and DOD is threatening to essentially kill the company by invoking many laws to force it to give.

    • ImPostingOnHN 4 hours ago

      US-controlled, AI-powered, fully-autonomous killbots are more likely to be used sooner against US civilians before any sort of invading enemy.

      Are you prepared to be the "enemy" of these soulless killbots? Do you personally have AI powered-weapons? You need to be at the cutting edge of capability, right?

  • MattDamonSpace 17 hours ago

    The sentence prior explicitly says this. There’s no dishonesty here.

    “Even fully autonomous weapons (…) may prove critical for our national defense”

    FWIW there’s simply no way around this in the end. If your even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.

    • blitzar 14 hours ago

      To stop a bullet flying at you you need a shield not another bullet.

  • mgraczyk a day ago

    Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it

    • nextaccountic 21 hours ago

      If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?

      Snowden revealed that every single call on Bahamas were being monitored by NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

      (Note, I myself am not an US citizen)

      Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

      [1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

      [2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...

      • mgraczyk 21 hours ago

        This isn't about privacy rights, it's about war

        I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance

        I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned

        • nextaccountic 21 hours ago

          But.. the US doesn't perform mass surveillance on foreign people only when it's at war. It doesn't perform mass surveillance only on adversarial nations it potentially could be at war either.

          This absolutely is about privacy.

          > I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned

          Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people

          • remarkEon 16 hours ago

            The United States gets to spy on countries when it's in the interest of the United States to do so. This isn't complicated. We get to spy on quite literally whoever we want abroad, within various legal and well established parameters, at at the risk of offending the governments of the spied-on. "It's only okay for the United States to spy on foreigners when they're in a shooting war with them" is silly.

            • calgoo 15 hours ago

              So you are saying its OK to spy on others because the US say is fine?

              Maybe the others on here are not happy that this company is supporting a fascist government in committing international aggressions on other countries which has been condemned by the majority of countries around the world.

              • remarkEon 14 hours ago

                I'm explaining reality to you. Real life is not a marvel comic book movie.

                • calgoo 13 hours ago

                  That is great, and i know this is not some crappy marvel comic. Im talking as a European who will be spied upon with this tooling, because we are not domestic. He seems perfectly fine with that, as well as using it in other military conflicts that has been caused by this governments greed.

  • 827a 16 hours ago

    If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines; you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

    • RGamma 13 hours ago

      Given how unstable and aggressive the US government is at the moment others having these weapons seems to be a good idea for balance. Not sure you are aware of the damage Trump is inflicting on international relations.

      But personally I wouldn't like to die because some crackpot with the right connections can will rest-of-world to that fate, no matter their affiliation. This escalation of destructive power and the carelessness with which it is justified pretty disheartening to see. Good times create bad people?

      • 827a 7 hours ago

        Reading comprehension check: I never stated that others shouldn't have the weapons. In fact, I stated what you are stating: that it is likely others will have the weapons, and for the sake of balance the West will be in a better place if the US also has them.

        • RGamma 7 hours ago

          My primary point was to state that reducing friction between will (e.g. want Greenland) and reality (send autonomous drone swarm) is a really terrible thing for the US to possess with these elites. This technology needs to spread fast if classic non-proliferation is unworkable.

          We seem to be unable to stop building the weapon, we seem unable to stop handing it over to morons, and I should expect these morons to not fire it?

          Then again, it's called MAD for a reason... What's one more WMD after all? Let's hope that we at least understand it before it becomes as powerful as everyone seems to think it will become.

    • gizzlon 15 hours ago

      > but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

      Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict

      • 827a 7 hours ago

        Reading comprehension check: I did not say that it reduced the risk of armed conflict. I said that it reduced the death and human suffering from armed conflict.

        Between the years of 1850-1950, an estimated 150M humans died (and many more permanently disabled) due to armed conflict (~1.5M/year). Between 1950-today: closer to 10M (~132k/year). The majority of those came from the Vietnam and Korean wars. If you limit the window to after 2000: only ~2M deaths, or ~78k/year. We carry bigger sticks than ever, and those sticks allow us to execute more strategic, incapacitating strikes, or stop conflict from even happening in the first place.

  • remarkEon 16 hours ago

    As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. You don't like what it could potentially be used for, or are having second thoughts about being involved in war making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.

    On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other then doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.

    • zaptheimpaler 15 hours ago

      They didn’t sell it no strings attached, they sold it with explicit restrictions in their contract with DoW and the DoW agreed to that contract. Their mistake was assuming they operate in a country where rule of law is respected, clearly not the case anymore given the 1000s of violations in the last year.

      • remarkEon 15 hours ago

        Contracts evolve, don't be naive. If you invent the Giga Missile and the government buys it for its war machine, and then you invent the God Missile right after, the government is going to come back again to renegotiate terms.

amai 9 hours ago
  • skylerwiernik 8 hours ago

    The quotes from those articles (short passages?) are

    > He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"

    > "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."

    > "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)

    I don't think that any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with them. I believe the CEO of NVIDIA has said similar. The Saudis invest in many public US companies, does that make those companies less trust worthy? What about taking private capital from institutions such as State Street and Blackrock? The last quote seems like more of a reflection than an allegation. It read to me as a desire to do better.

    I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.

    • amai 7 hours ago

      The problem is this:

      > The Saudis invest in many public US companies, does that make those companies less trust worthy?

      It does. If Anthropic takes money from the middle east that might be the reason, why they cannot work for the Pentagon. Simply because the Pentagon works together with the Israeli Forces and middle east investors might not like this. So Anthropic has to decide to either take a lot of money from the middle east, or work for the Pentagon.

      Of course the problem goes much deeper than just Anthropic. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. Because basically this is dirty money, generated by slavery and forceful suppression of people. We should forbid all companies to take this kind of dirty money. But because we don't do that at the moment companies who don't take this dirty money will have a disadvantage against companies that do. And because companies are all about money, in the end they are basically forced to act against their good intentions, just to survive.

      We as society have to stop this. We must make sure, that companies who are not taking dirty money survive the competition. My idea would be to extend the rules for money laundering to all countries that are dictatorships. But there might be other ideas, to level the playing field between companies, so we as society can help them to make the right decision.

      • krferriter 4 hours ago

        X/xAI has received billions in investment from the royal families of Saudi Arabia, UAE, and Qatar.

      • rokhayakebe 4 hours ago

        Who hasn't taken money from the Middle East?

    • b40d-48b2-979e 7 hours ago

          The Saudis invest in many public US companies, does that make those companies
          less trust worthy?
      
      Uhh.. yeah?

          we've seen a lot worse from many of their competitors
      
      I think we should demand people do better than just being slightly above the worst.
      • anon84873628 6 hours ago

        So do you check the ownership of every public company you might interact with?

  • techblueberry 6 hours ago

    Maybe not and maybe you shouldn't. But I feel like the real story here isn't what Anthropic is saying, but that while Anthropic seems to be bending over backwards to give the Defense Department exactly what they need, defining two of the most reasonable red lines that most American would agree with and are already likely illegal, Pete Hegseth in return is threatening the continued existence of their company.

    So let's see what happens tonight at 5:01PM but Anthropic isn't really the story here.

  • xpe 7 hours ago

    I read the articles. As far as factual reporting, I will tentatively take them at face value. But in terms of their editorializing, it is frankly weak by my standards. It would not survive scrutiny in a freshman philosophy class.

    Ethics is complicated. I’m not saying this means it can’t be reasoned about and discussed. It can! But the sources you’ve cited have shown themselves to be rather shallow.

    I encourage everyone to write out your ethical model and put yourself in their shoes and think about how you would weigh the factors.

    There is no free lunch. For many practical decisions with high stakes, many reasonable decisions from one POV could be argued against from another. It is the synthesis that matters the most. Among those articles, I don’t see great minds doing their best work. (The constraints of their medium and funding model are a big problem I think.)

    Read Brian Christian’s “The Alignment Problem”’s take on predictive policing if you want a specific example of what I mean. There are actually mathematical impossibilities at play when it comes to common sense, ethical reasoning.

    Common sense ethical reasoning has never been very good at new or complicated situations. “Common sense” at its worst is often a rhetorical technique used to shut down careful thinking. At its best, it can drive us to pay attention to our conscience and to synthesize.

    I suggest finding better discussions and/or allocating the time yourself to think through it. My preferred sources for AI and ethics discussions are highly curated. I don’t “trust” any of them absolutely. * They are all grist for the mill.

    I get better grist from LessWrong than HN 99% of the time. I discuss here to make sure I have a sense of what more “mainstream” people are discussing. HN lags the quality of LW — and will probably never catch up — but it does move in that direction usually over time. I’m not criticizing individuals here; I’m commenting on culture.

    Please don’t confuse what I’m saying as pure subjectivity. One could conduct scientific experiments about the quality of discussions of a particular forum in many senses. Which places are drawing upon better information? Which are synthesizing it more carefully? Which drill down into detail? Which participants have allocated more to think clearly? Which strive to make predictions? Which prioritize hot takes? Which prioritize mutual understanding?

    It isn’t even close.

    Opinions and the Overton window are moving pretty rapidly, compared to even one year ago.

    * I’ve written several comments about viewing trust as a triple (who, what, why). This isn’t my idea: I stole it.

    • anon84873628 6 hours ago

      I understand you are criticizing their editorializing, but can't tell if you agree with the conclusions or not. Care to editorialize yourself?

      • xpe 19 minutes ago

        When someone says something that I think is poorly framed, I often reframe it and speak to that instead. (Lots of people do this, even if they don’t realize it. I’m aware that I do, for better and worse, and I still prefer it; I think it is more authentic. I think some of the best ways we can enrich other people’s lives is by sharing different ways of processing the world. Lots of people get locked into pretty uninteresting narratives.)

        So reframe I did. (I don’t think those articles you cited are worth any more attention than I’ve already given them.)

        My most blunt editorializing would be this: most people would be better grounded if they read AI alignment and safety books by Stuart Russell, Nick Bostrom, Brian Christian, Eliezer Yudkowsky, and Nate Soares. If you’ve read others that you recommend, please let me know. I’ve read many that I don’t usually recommend.

        As far as long form articles, I recommend Paul Christiano, Zvi Moshowitz, as well as anyone with the fortitude to make predictions while sharing their models (like the AI 2027 crew).

        I recommend browsing “Best of Year Y” (or whatever they are called) articles on the AI Alignment Forum and LessWrong. They are my go-tos for smart & informed writing on AI. For posts that have more than say 100 votes, the quality bar is tremendously higher than almost anywhere else I’ve seen, including mainstream sources with great reputations.

        In conclusion, I would rather point to interesting people to read and places to engage.

helaoban a day ago

All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.

The military should be reigned in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

  • techblueberry 19 hours ago

    The private corporation is not dictating to the military, it’s setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn’t, and now they want to change the terms after the contact was already signed. No mytholgization needed, just contract law.

    • nemo44x 3 hours ago

      The country is sovereign. It can just make a law democratically that changes things. The sovereign must act on whatever is in its best interest. The method of action is democratic in this case.

  • ricardobeat a day ago

    > The technology can just be requisitioned

    During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.

    • wrqvrwvq 21 hours ago

      It has always been a part of democratic rule, in peacetime and war. All telco's share virtually all of their technology with the government. Governments in europe and elsewhere routinely requisition services from many of their large corporations. I think it's absurd to think llm's can meaningfully participate in realworld cmd+ctrl systems and the government already has access to ml-enhanced targeting capabilities. I really have no idea what dod normies think of ai, other than that it's infinitely smarter than them, but that's not saying much.

      • ricardobeat 3 hours ago

        Not the same thing. The parent comment was talking about government “requisiting” services as in forceful compliance, takeovers, not collaboration or regulatory compliance.

      • not_that_d 15 hours ago

        I would like to see a proof of this happening in Europa.

        • soderfoo 13 hours ago

          If you're referring to telcos sharing their tech with government there are a few examples of Ericsson working with the Swedish military:

          > Brigadier-General Mattias Hanson, CIO, Swedish Armed Forces, says: “Strengthening Sweden’s militarily and acting as part of a collective defense requires us to increase our defensive capabilities. We need to utilize the latest technology and all the innovative power of the Swedish private sector. Sweden has unique skills and capabilities in both telecoms and defense technology..." [0]

          This is just one quick example I could find.

          [0] https://www.ericsson.com/en/news/2025/6/ericsson-5g-connecti...

    • helaoban a day ago

      The question of whether or not the government should be able to use AI for targeting without the involvement of humans is a wartime question, since that is the only time the military should be killing people.

      Under such a scenario, requisition applies, and so all of this talk is moot.

      The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.

      Edit:

      There's a yet larger question on whether any legal constraints on the military's use of technology even makes sense at all, since any safeguards will be quickly yielded if a real enemy presents itself. As a course of natural law, no society will willingly handicap its means of defense against an external threat.

      It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.

      • anon84873628 6 hours ago

        You could view various non-proliferation agreements as a legislative constraint on military technology.

        Same for chemical and biologicals. Those do prove your point that the law will be ignored if expedient. But it doesn't invalidate the notion of a society putting constraints on itself.

    • tw1984 15 hours ago

      > an expected part of democratic rule.

      give yourself a break. what your fancy democratic rule still holds under Trump?

      • anon84873628 6 hours ago

        Yeah, we all know that. They were making a point in response to the parent.

      • kristjansson 2 hours ago

        This cynicism is the surest way to doom it

      • ricardobeat 3 hours ago

        Some of us don’t live in the USA.

  • blitzar 14 hours ago

    > Private corporations should never be allowed to dictate how the military acts.

    The military should never be allowed to dictate how Private corporations act

  • snowwrestler 6 hours ago

    Congress needs public pressure to act, and the public needs a spur to apply pressure. That’s really what Amodei is doing with this statement.

  • jobs_throwaway 20 hours ago

    > The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.

    I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.

    > Or the models could be developed internally, after having requisitioned the data centers.

    I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?

    • qup 7 hours ago

      > Remember when they couldn't even build a proper website for Obamacare?

      With a massive budget, too. Hundreds of millions iirc.

      It felt like a website that the small web-dev shop I worked for could build without much problem in a couple months.

      We didn't have 200 layers of beauracracy, though.

      That said I don't doubt the military could take their current tech and keep it running. It's far different from the typical grift of government contractors.

      • jobs_throwaway 7 hours ago

        Maybe they could keep it running. With the way models are improving though, I don't think that'd be useful for long. In 6 months or a year when the frontier is again pushed out, I don't think the military is going to want to be running Opus 4.6

        And contrary to what the model-makers would like you to believe, I don't think we're anywhere close to the system being self-improving enough that you could just let it run without intervention and it spits out a new frontier model

  • tootie a day ago

    It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.

    • raincole 12 hours ago

      > We need to deprogram like 70M very confused people

      With this mindset the said group will quickly grow to half of the US population.

      • b40d-48b2-979e 7 hours ago

        You seem angry about being called out here. No, it won't grow to half the population since the existing support keeps shrinking over time.

    • helaoban 17 hours ago

      You should be asking why 70 million people voted the way they did in spite of the events you describe.

      I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.

      You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.

      • matwood 15 hours ago

        > You should be asking why 70 million people voted the way they did in spite of the events you describe.

        In part the propaganda machine that started in the 80s with AM talk radio, culminating to algorithmic feeds today.

        • helaoban 14 hours ago

          If that is the case, you have to explain why right wing propagandists have been so much more successful than left wing ones.

          • sethammons 11 hours ago

            That seems relatively straightforward, so likely incomplete: the left is a collective of various interests that often don't align internally and the right has very consistent and largely aligned interests. One of those is easier to steer. Another facet could also be education levels. As they say, a lie can get across town before the truth has its pants on. Being educated takes time and effort, and the educated lean left.

            • titzer 10 hours ago

              They are also absolutely shameless about lying and feel no obligation to stick to facts or data, but rather appeal to and cultivate ignorance, binary thinking, fear, us-versus-them thinking, and scapegoating. In short, their propaganda is more effective because they lean into it being propaganda.

              • uean 5 hours ago

                I really encourage you to avoid the language of "they" and "we." It's a discussion, and it doesn't need to be an attack of which you are putting yourself on a side, or as you put it, binary thinking. As written I can't know if you are talking about either the right or left.

                • titzer 3 hours ago

                  I think you want to read my comment a certain way and it's not allowing you to, so you posted both:

                  > it doesn't need to be an attack of which you are putting yourself on a side

                  and also

                  > I can't know if you are talking about either the right or left

                  Which are contradictory, if you think about it. I am not sure what you want me to write if I can't use "they" to refer to other people. Also, I didn't use "we", something you somehow also seem to want me to say, and didn't.

          • NekkoDroid 12 hours ago

            My guess is lack of morals

          • stackbutterflow 12 hours ago

            Because it's easy when you don't let facts block you. Spread lie number 1 on Monday morning, lie number 2 in the afternoon, lie number 3 the next day, and do that for years and decades.

            Whenever someone spends the time, and it takes a long time, to correct you, laugh, mock them, spew a few more lies.

            And it's easy to do when the rich, the owner class side with you, because they buy newspapers, websites, ads, which you can't do if you lean left because acquiring money at all cost is not a priority of left wing people.

      • kalkin 17 hours ago

        I'm curious for your understanding of why Trump won in 2024. If I'm understanding right, you think it was because American voters were rejecting Maoism ("it was called re-education"), to which you think the previous commenter likely subscribes, and which voters associated with Harris/Walz? But I suspect I'm not getting it quite right, and it would be helpful if you would spell out what you mean, rather than just relying on allusion.

        (I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)

        • helaoban 14 hours ago

          I don't want to ascribe any particular political beliefs to the commenter, the quip about re-education was somewhat of a joke given the irony of somebody arguing against dictatorship by invoking mass "deprograming". But many a true word is spoken in jest.

          There are no real Maoists or true communists in the US anymore, at least not enough to constitute meaningful political forces. To the extent they exist they are irrelevant, and one can argue further that no true left remains in the US at all.

          As for my analysis of the Trump phenomenon, I only have intuitions and biases to offer, so caveat lector.

          I don't think it's particularly mysterious. The general perception is that the American left has made identity politics and social justice its main political and social programs, to the detriment of basic governance, most importantly the economy and security, thereby breaking the social contract.

          You cannot be a party that aggressively defends and promotes the interests of minority classes at the expense of the majority without loosing the support of the majority. In some cases, these minorities are so small as to border on the absurd.

          Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size. The same goes for the LGBT population, which represents maybe 10% of the US population (and that's a liberal estimate).

          Try as you might, you cannot escape the cold, hard fact that 60% the US population is white, with something closer to 70% identifying as white or partly white. 90% percent of that group is going to be straight.

          The US middle and working classes still really haven't recovered from the financial crisis of 2008, the aftermath of which precipitated a huge transfer of wealth from these classes to the upper class, a trend that accelerated during the pandemic.

          So you have a majority of the population who are reeling from a devastating loss of wealth, station, and status, unable to keep pace with inflation, watching one of the two main political parties aggressively promote the interests of a tiny minority at their expense, or at least that is the perception.

          Putting aside the nature of the minorities in question, the subservience of the political class to a minority of the population has another name: elitism. The natural response to elitism is populism, which is what we are seeing.

          The protection of minority rights is a noble cause, but it's primarily a civil rights issue, and the focus should be on making sure those classes are treated equally under the law. The goal should not be the elevation of their social and cultural station above the majority.

          Biden, and then Harris/Waltz, are the kind of the ultimate expression of this left-wing, elitist decadence. Biden appointed a man who wears stilettos and dresses to work in charge of nuclear waste as the Department of Energy. People can rage at me all they want for that description, but that is what the majority of Americans perceive. Again, putting aside any questions of morality, it is political suicide.

          Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the democratic party from their ideological roots in the labor movement which was always militantly against illegal immigration. Again, the perception is that interest of minorities (in this case migrants) are primary to the interests of the majority. In this case the minority are not even American citizens.

          There's a lot more to say on this topic, and I'm sure you can find more persuasive analyses from better sources, but these are some of my intuitions.

          Thanks for coming to my TED Talk.

          1. https://williamsinstitute.law.ucla.edu/publications/trans-ad...

          • kalkin 5 hours ago

            > Biden, and then Harris/Waltz, are the kind of the ultimate expression of this left-wing, elitist decadence. Biden appointed a man who wears stilettos and dresses to work in charge of nuclear waste as the Department of Energy... Tolerance of mass border crossings was probably a more directly fatal error...

            This is just totally disconnected from policy reality. Biden did not tolerate mass border crossings. (I _wish_ he'd dismantled ICE, but he very clearly did not.) A relatively minor DoE appointment going to a member of an unpopular minority both has nothing to do with policy and is the kind of thing that must necessarily be acceptable if minorities are actually going to be "treated equally under the law". This is a ludicrous basis to infer "the subservience of the political class" to transgender people.

            On the other hand, Trump is a billionaire with Epstein connections and entirely unabashed about making money for his businesses and family using his government position. If this isn't "decadence", or "elitism", what meaning could the words possibly have?

            "Deprogramming" might be an unfriendly word but it's hard for me to imagine how you have a functional democracy when a plurality of voters are making decisions on the basis of straightforward falsehoods, or even inversions of reality, just because "at least that is the perception". This isn't a sustainable situation, and it will end with either re-connecting these people to reality or disenfranchising them (really, them disenfranchising themselves along with the rest of us, e.g. by re-empowering someone who tried to steal an election). The former seems vastly preferable.

            Speaking of unfriendly words - I also broadly have very little sympathy for a demand that people on the left speak respectfully of Trump voters given the total lack of any reciprocation. Even if it is the right way to do politics, the asymmetry between the way Democratic politicians talk about rural areas and the way Republican politicians talk about cities is another thing that's totally unsustainable.

          • Capricorn2481 5 hours ago

            This is a great example of a well put together, level-headed analysis, that I still think misses some key facts about how right wing propaganda works.

            > Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the democratic party from their ideological roots in the labor movement which was always militantly against illegal immigration

            Both Biden and Obama turned away more immigrants than Trump did in his first term. And Clinton is the kind of denying asylum. The idea that we just had completely open borders and nothing was being done about is a fabrication.

            > Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size

            If you actually pay attention to who is talking about Trans people, it is the right. Liberal media may be occasionally baited into arguing about it, but to say it was a major platform is a perception the right crafted. Fox was talking about it 24/7 leading up to the election [1]. Musk and Trump were tweeting about it constantly. They ran political ads saying they wanted to convert your kids to trans ideology. It's gotten so bad that our current president just harasses women that look kinda manly, saying they are trans.

            [1] https://www.yahoo.com/news/fox-news-covers-transgender-issue...

            • thrwawty4 4 hours ago

              If the Democrat leadership weren't going all-in on this ideology despite the demonstrable harms it's causing, the Republicans would have almost nothing to say about it.

              As an example, replacing sex with "gender identity" in prisons policy has inflicted considerable harm on women prisoners, who have been sexually assaulted, raped and impregnated by male prisoners who were transferred to the female prison estate on the basis of their supposed "female gender identity".

              Feminist groups like WoLF spoke up on the horrors of this first, and the Republicans followed when they realized they could capitalize on this politically. But really it shouldn't have happened at all.

      • tristor 3 hours ago

        >You should be asking why 70 million people voted the way they did in spite of the events you describe.

        Propaganda, 1 in 6 Boomers being exposed to amounts of lead in childhood that lead to measurable cognitive declines, average age of the US population being on the rise with lower birth rates means most eligible votes are in the age groups most likely to suffer low grade dementia, and the weaponization of social media by foreign adversaries and wealthy elites.

        There's maybe 4-5M true believers, the rest are gullible lead-addled old fools who got brainwashed by Fox News. That's the unvarnished truth of it.

      • tootie 9 hours ago

        There was no landslide. Trump got 49.9% of the vote. And it was after his attempted insurrection to overturn a valid election in which he was soundly rejected. He's never received 50% of the vote despite his relentless lies about voter fraud.

        I'm not upset at people for having a differing opinion or being upset at some economic conditions attributable to Democrats, but rather their persistent belief in provably false information like the relative danger of immigrants, the causes of climate change, vaccine safety, election security or whether or not a particular ethnic group is eating their pets. This isn't a matter of opinion or it's a matter of observable reality and fundamental human morality.

      • gcbirzan 13 hours ago

        > Trump's landslide victory in 2024.

        What are you talking about?

        • helaoban 13 hours ago

          If you want to challenge a point, then challenge it. Don't cower behind ambiguous snark.

          • titzer 9 hours ago

            It wasn't a landslide.

            It's on you to argue it was, e.g. by comparing it to other clear landslide victories like Reagan in 1984. Truth is that 2024 the final popular vote gap was 1.5%, compared to 4.5% for 2020, -2.0% for 2016 (yeah, really), 3.9% in 2012, 7.28% in 2008, and so on.

  • JackYoustra 5 hours ago

    I'm sorry I read this a lot and this is kind of an insane thing to say? Classified OLC memos giving legal cover to any military action has been a fixture for the last over twenty years! Congress never abdicated power, it just, by the nature of the constitution, practically has SO much less power than the president! The president is a single person that people elect, they expect the person to be a leader, and congress will always, always play a following role so long as the president has unilateral power over the military, is directly elected, and just in general has expansive interpreting authority over laws.

    You know who doesn't have as much power? The swiss head of state, so weak you can't even reliably name them! THATS what it looks like to defeat personalization, not some hand wringing hoping a system does something that it wasn't designed to do.

  • vonneumannstan 8 hours ago

    This is just a weird Trump talking point. This situation is unprecedented on many levels. The pentagon already had a signed contract with these stipulations and wanted to unilaterally renegotiate with Anthropic under threat of deeming them a foreign adversary and destroying their business if they didn't accept the DoD demands. It's totally absurd to turn this around on Anthropic and paint them as trying to determine US Military policy.

  • dartharva 19 hours ago

    > The military should be reigned in at the legislative level, by constraining what it can and cannot do under law.

    Is there an example of such a system existing successfully in any other country of the world that has a standing army?

    • helaoban 17 hours ago

      I think any such examination of a military that doesn't actually fight wars is meaningless. The question can only be really asked of a handful of countries.

  • einpoklum 9 hours ago

    > Congress having thoroughly abdicated its powers to the executive.

    Good thing the US is led by such figures as Donald Trump or Joseph Biden, stalwart trustworthy men with their hands firmly on the wheel.</sarcasm>

jjcm a day ago

This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the Governemnt move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

  • panarky 20 hours ago

    Does the Defense Production Act force employees to continue working at Anthropic?

    • nerdsniper 19 hours ago

      No. It really only binds the corporation, but it does hold the executives/directors personally responsible for compliance so they’d be under a lot of pressure to figure out how to fix enough leaks in the ship to keep it afloat. Any individual director/executive could quit with little issue, but if they all did in a way that compromised the corporations ability to function, the courts could potentially utilize injunctions/fines/jail time to compel compliance from corporate leaders.

      Also there’s probably a way to abuse the Taft-Hawley act beyond current recognition to force the employees to stay by designating any en-masse quitting to be a “strike / walk off / collective action”. The consequences to the individuals for this is unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).

      If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.

      It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.

      • pnt12 14 hours ago

        The thesis could get an F at law school, but it is not guaranteed that the government will act lawfully. Its useful to think about what the administration can do, legal or not, especially when given little challenge when acting illegally.

      • fluidcruft 19 hours ago

        Maybe Anthropic could replace its employees with AI. Unlikely the admin is going to enjoy setting precedent that employees are protected against being replaced by AI.

    • SilverElfin 19 hours ago

      [flagged]

      • zombot 9 hours ago

        > fake wars

        Once a war has started, it won't be fake any more.

        > they’ll definitely declare wars to extend the presidency.

        You don't exchange the Fraudster in Chief while at war, so they do want a war. Any war. But I have the strange impression that von Clownstick doesn't want to be seen as having started it by himself.

      • deadbabe 19 hours ago

        Presidency can’t be extended by wars.

        • jaegrqualm 19 hours ago

          FDR's tenure might have created an amendment to that effect, but it's not like this administration hasn't used a legal loophole before.

          Perhaps there's a war, that a misguided congress won't declare as such, and a certain vice president that runs for president, with a certain someone as his vice president...

        • PontifexMinimus 19 hours ago

          Not constitutionally, at any rate.

          • SlightlyLeftPad 19 hours ago

            What would happen if he tried by not vacating at the end of his term, when challenged in court, shut down by his own Supreme Court? I mean let’s be real, all it really takes is him not giving up the white house. I sometimes wonder.

            • goatlover 19 hours ago

              Steve Bannon advised Trump to do this in 2020. Question is what would the Secret Service and Pentagon do once the election is certified for the winning candidate? If their loyalty remains to the Constitution, Trump would be forcibly removed.

            • krapp 19 hours ago

              We went through this when it looked like he might not leave last time. What happens is the Marines show up and politely throw his ass to the curb.

              You do not under any circumstances gotta hand it to the American military but they do seem unwilling to play a role in Trump's let's say extraconstitutional ambitions. At least a junta doesn't seem likely. Without the military behind him he's just a senile old pedophile. What's he going to do, lock himself into the Oval Office?

              • wildzzz 18 hours ago

                The military is the one drone striking boats in the Caribbean. The military invaded a foreign country we are not at war with to kidnap its leader. The military dropped bombs on a foreign country we are not at war with. The military is patrolling the streets of DC and other cities. The military is the one spending the money on new immigrant detention centers. I fail to see how they are standing up to Trump's illegal acts. I'm not 100% sure the White House Marines will just throw Trump to the curb if Congress manages to certify the election in favor of someone else.

                • krapp 8 hours ago

                  The military drone striked civilians in Obama's day, they did Abu Ghraib and Agent Orange and countless other war crimes. But aiding a President in a coup would be beyond the pale. Maybe I'm being naive, but I do think a lot of soldiers would refuse to do that even if they could contextualize and compartmentalize everything else.

                • deadbabe 5 hours ago

                  Those are things the military wanted to do anyway, Trump just enabled them.

                  But violating the constitution with such a blatant power grab, and thus throwing the future of the United States and its military into uncertainty, is probably not something they want. Better to just force Trump out and maintain the status quo of new presidents every 4-8 years.

        • vlovich123 19 hours ago

          … not yet. The problem with a norm breaking presidency like Trump’s and the GOP power structure is that no norm is safe, including elections.

        • NullPrefix 19 hours ago

          Zelensky's presidency was supposed to end couple of years ago. Would it be different in USA?

          • Tostino 18 hours ago

            Different constitutions. Were you trying to muddy the waters, or are you just ignorant of the details?

  • JumpCrisscross 20 hours ago

    > this is a strong arm by the governemnt to allow any use

    It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.

    • altacc 13 hours ago

      Trump/Miller/whomever don't need to be actively involved in every decision. They have defined an approach to strong arm problem solving and weaponisation of the government that anyone that works for them is implicitly allowed to use. The supposed controls that were meant to prevent this have crumbled or aligned.

      • JumpCrisscross 7 hours ago

        > They have defined an approach to strong arm problem solving and weaponisation of the government that anyone that works for them is implicitly allowed to use

        And one of the few constraints in their approach is not to fuck with the Dow. Expropriating Anthropic’s IP would trash the AI sector, and by extension, the Dow. (Even designating it a supply-chain risk sets a material precedent that a future administration could use against OpenAI and xAI.)

        Hegseth is bluffing on his most destructive fronts, even if he doesn’t know it.

    • Quarrelsome 12 hours ago

      flippant? Its aggressive, belligerent and entitled. I'm not seeing "flippant". Unless this is some sort of weasely "oh we only threatened them a bit" bullshit. This is about entitled pricks in government who consider their temporary democratic mandate as a carte blanche for absolutism.

    • cmrdporcupine 20 hours ago

      It definitely has the aroma of either Bannon or Miller or both.

      • 0xDEAFBEAD 19 hours ago

        Believe it or not Steve Bannon is quite concerned about AI development:

        >Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."

        >...

        >"You have more restrictions on starting a nail salon on Capitol Hill or to have your hair braided, then you have on the most dangerous technologies in the history of mankind," Bannon told his listeners.

        https://abcnews.com/US/inside-magas-growing-fight-stop-trump...

        • cmrdporcupine 8 hours ago

          Him being "concerned" about it doesn't mean he doesn't want to bring Anthropic to heal.

    • xpe 19 hours ago

      > It’s a flippant move by Hegseth.

      Care to convert this into a prediction?: are you predicting Hegseth will back down?

      > I doubt anyone at the Pentagon is pushing for this.

      ... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?

      One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.

      • JumpCrisscross 17 hours ago

        > are you predicting Hegseth will back down?

        I think he may be able to cancel Anthropic’s contract. But no more. He won’t back down as much as be overruled.

        > As SecDef/SecWar, Hegseth is the head of the Pentagon

        On paper. Also, being the de jure head of something doesn’t automatically mean you speak for it as a whole.

        > while also taking his power seriously

        Authority and power are different. A plane pilot has a lot of authority. They don’t have a lot of power.

        • blitzar 9 hours ago

          > I think he may be able to cancel Anthropic’s contract.

          This outcome might be a win for everyone involved, the time and effort for those billions with a lot of strings attached are less useful as Ai matures.

        • xpe 9 hours ago

          The above is fairly surface level. See my other comment for particulars that matter a lot: https://news.ycombinator.com/item?id=47176361

          You’ll notice I’m trying to avoid debating generic phrases and terms such as “power” that probably won’t advance mutual understanding of this situation. I’m talking about specific actions and systems. It makes it clearer.

          • JumpCrisscross 8 hours ago

            > notice I’m trying to avoid debating generic phrases

            You’re missing the forest for the trees. Take the tariffs as analogy. Specifying the laws invoked to effect the tariffs is more precise, but less complete than describing Trump, Bessent and Navarro’s motivations and theories.

            Same here. We can wax lyrical about the DPA and specific statutory authorities and how they may be litigated. Or we can look at the actual power structures. The former is precise but inaccurate. The latter is the actual dynamic.

            > terms such as “power” that probably won’t advance mutual understanding

            If terms like power and influence don’t make sense to someone, they’re going to be lost in any political discussion. But particularly under this administration.

            There aren’t legal analytic fundamentals driving why Trump hates windmills or Biden pardoned his son, these were expressions of Presidential power and preference. The legality was ex post facto.

            • xpe 7 hours ago

              Person to person, we’re talking past each other. If we were sitting down face-to-face or even with a video call, this would be a totally different conversation.

              How much are we connecting in this particular conversation? What if each of us were to step back and ask 3 questions: What am I trying to communicate? Are we both interested in having this conversation? Are we both learning from it?

              Again, this is not meant as a criticism of you. It is a statement of the dynamic here, and how we’re relating. (Even though HN is well above average, it has massive failure modes when you view it from a systems POV.)

              My feeling is that you aren’t responding to the intent behind my statement. But I’ll also recognize that I’m probably not communicating that lands for you. Maybe you feel the same in reverse? That would be my guess.

              This as a failure of our communication norms and technologies. Given we’re in the year 2026 and have minimal technical barriers, we have very much failed culturally to get anywhere close to the potential of the Internet or whatever needs to come next.

              • JumpCrisscross 7 hours ago

                Genuine question, are you using AI to edit your comments? Going on a rhetorical side quest in a straightforward discussion about policy, law and politics is…well, it’s not on topic.

                For what it’s worth, I’m not seeing a failure of communication. I’m seeing a failure of scoping. You’re arguing on the basis of specific legal mechanisms by which power is expressed. I’m arguing the real motivations of and political constraints on decision makers are more fundamental in this case.

                That isn’t universally true. Power predicted what Trump would do with tariffs (again, analogy). Legal analysis predicted his constraints (which SCOTUS affirmed). In this case, SecDef has the legal authority to do what’s described. He doesn’t, however, have the political freedom to do so. That turns the latter into the germane constraint, not a litany of proscribed powers.

                Put another way, the people—here—are fundamental. (Market reactions, too, though again largely because the people in this administration have chosen the Dow as a lighthouse.) The legal justifications are worse than surface level, they’re ex post facto findings of retaliatory paths. It may feel more substantial to quote DPA statute versus discuss Hegseth and Dario’s motivations and relationships, but that’s, again, missing the forest for the trees.

                • xpe 43 minutes ago

                  It takes two to tango. I bowed out nicely and put in a good faith effort to communicate why. Maybe on a different day in a different forum, we could have a useful conversation for both of us. I would look forward to that.

                • relaxing 6 hours ago

                  [flagged]

                  • dang 38 minutes ago

                    Please don't cross into being a jerk. Posts like this one and https://news.ycombinator.com/item?id=47175955 are the kind of thing we ban accounts for, regardless of how right you are or feel you are.

                    It's true that there's a lot of grey area and turbulence right now around which HN posts have been LLM-generated or LLM-edited, and it's compounded by the fact that there's no way to tell for sure. We all have to find our way through this—both the community and the mods. But we can and need to do so without breaking HN's rules ourselves in the process.

      • tz1490 19 hours ago

        It matters because the whole media is selling this as a Pentagon initiative, while probably 75% in the Pentagon think this is snake oil just like the previous Microsoft VR goggles.

        If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028. Soldiers literally dragged their feet at the glorious Trump military parade, when they walked disinterested and casually instead of marching.

        • xpe 18 hours ago

          > If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028.

          While I grant the spirit of this point, I don't think it applies to this situation. The "bureaucratic resistance" explanation doesn't fit when you think about what would happen next. Here is my educated guess based on some research:

          - contract termination: Hegseth can direct the relevant contracting officer(s) at the Pentagon to terminate the contract. This could happen within days. Internal stonewalling here might add weeks of delay, but probably not more than that.

          - supply chain risk designation: Hegseth signs a document, puts it into motion. Then it becomes a bureaucratic process that chugs along. Noncompliant contracting officers probably would be fired, so this happens within weeks or a few months. Substantial delays could come from litigation, to be sure -- but this isn't a case where civil service stonewalling saves us.

          - Defense Production Act: would require an executive order from Trump. This would go into effect right away, at least on paper. It would very likely lead to litigation and possibly court injunctions.

          My point is that non-compliant civil servants at the Pentagon probably can't slow it down very much. (I recommend they do what their oath and conscience demands, to be sure!) Hegseth has shown he's willing to fire quickly and aggressively. I admire people who take a stand against Hegseth and Trump -- they are a nasty combination of dangerous and corrupt. At the moment, they appear weaker than ever. Sustained civil pushback is working.

          Let's "roll this up" back to my original point. I responded to a comment that said "I doubt anyone at the Pentagon is pushing for this.", asking the commenter to explain. I don't think that comment promotes a better understanding of the situation. It is more useful to talk about the components of the situation and some possible cause-effect relationships.

  • mandeepj 19 hours ago

    First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).

    > Mass domestic surveillance.

    Since when has DoD started getting involved with the internal affairs of the country?

    https://en.wikipedia.org/wiki/United_States_Department_of_De...

    • _kst_ 18 hours ago

      The Senate??

      Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.

      • mandeepj 17 hours ago

        Right! I meant to write ‘Congress’, but mistakenly wrote Senate.

    • Lerc 19 hours ago

      It's whatever what the people who have the power want to call it. What is written on a piece of paper is irrelevant if it is not acted upon.

      If the rename gets struck down then they don't have the power. If it doesn't they have the power.

      There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.

      Until they did it anyway.

      • jazzyjackson 16 hours ago

        I don’t know, to me it seems like their MO to make an announcement and not follow up on it. All the paperwork still says DOD, all the contracts are with DOD, there is no legal entity called DoW

      • darkerside 18 hours ago

        This is fascism

        • Lerc 18 hours ago

          I don't think many are doubting that. I'm not talking about the way things should be. I'm talking about the way they are.

          • darkerside 11 hours ago

            This is normalization of fascism

            • zombot 9 hours ago

              Which is what naturally happens when fascists are in power.

    • Quarrelsome 12 hours ago

      I'd imagine the pentagon are more interested in the autonomous kill bot part than the surveillance part.

    • khazhoux 14 hours ago

      Well, Trump renamed it, and since Congress is now a subsidiary of the Executive Branch, it's the Department of War.

      • zombot 9 hours ago

        Resist. Continue calling it the DoD.

    • culi 19 hours ago

      They've already spent millions on the name change. It's also the original name of the department. IMO it's a more honest name

      • 9dev 7 hours ago

        It doesn't matter how much they've spent, nor what you think. Renaming it requires congressional approval, which they have not gotten.

    • tokyobreakfast 19 hours ago

      www.defense.gov redirects to www.war.gov but I like how you refer to Wikipedia as the authoritative source to prove this functionally irrelevant and aggressive Reddit-style seething.

      The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.

      • 9dev 7 hours ago

        > It's very important to realize there is literally nothing you—or anyone else—can do about this.

        What an utterly bewildering statement. So your suggestion is to suck it up, because we're all impotent anyway? The only thing that can bring authoritarian systems down is civil resistance.

  • intermerda a day ago

    [flagged]

    • grosswait 20 hours ago

      [flagged]

      • djeastm 19 hours ago

        >It’s already close to losing all meaning.

        On the contrary, seeing it take hold before our very eyes gives it more meaning than it ever had in the pages of the history books.

        • grosswait 8 hours ago

          On the contrary, claiming it’s taking hold and labeling everything fascism doesn’t make it so

      • xpe 19 hours ago

        There is a difference between a politician making a contradictory statement and the largest agency in the United States using probably unconstitutional pressure tactics against a business.

    • SilverElfin 19 hours ago

      I see this a lot on the immigration topic. They’re simultaneously too rich and taking over everything, but also low paid slave labor displacing white Christians everywhere.

  • calvinmorrison 21 hours ago

    More like the government is treating this like the near term weapon it actually is and, unlike the Manhattan project, the government seems to have little to no control.

    • fwipsy 20 hours ago

      Anthropic has been pushing for commonsense AI regulation. Our current administration has refused to regulate AI and attempted to prevent state regulation.

      "The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."

      • polski-g 8 hours ago

        Because of Bernstein v DOJ, any AI company in the 9th circuit cannot be regulated because software is considered free speech.

    • toomuchtodo 21 hours ago

      Note that they always attempt to exert control they don’t have. They’re always bluffing, and they keep losing. Respond accordingly.

      • gclawes 20 hours ago

        The government should be entitled to any lawful use of a product they purchase, not uses dictated solely by the provider. It's up to courts to decide what lawful use is, it's not up to these companies to dictate.

        • mediaman 20 hours ago

          The product is a service, and they agreed to a contract. Now they don't like the contract.

          Is your view that contracts with the government should be meaningless? That the government should be able to unilaterally, and without recourse, change any contract they previously agreed to for any reason, and the vendor should be forced at gunpoint to comply?

          If you do believe this, then what do you believe the second order effects will be when contracts with the government have no meaning? How will vendors to the government respond? Will this ultimately help or hinder the American government's efficacy?

          • danorama 19 hours ago

            Seriously.

            Hegseth trying to play “I’m altering the deal. Pray I don’t alter it any further” just shows this gang’s total lack of comprehension of second-order effects.

        • isodev 20 hours ago

          > It's up to courts to decide what lawful use is

          No, it’s up to the government to create policy and legislation that outlines what is lawful or not and install mechanisms to monitor and regulate usage.

          The fact that an arm of the government wants to go YOLO mode is merely a symptom of the deeper problem that this government is currently not effectual.

          • grosswait 20 hours ago

            Do you have any insight that what they want to do is YOLO, as opposed something your sure you’ll disagree with?

            • isodev 20 hours ago

              YOLO here refers to unsafe usage of LLMs. Your government is supposed to make legislation that protects all of its citizens, it’s not “what you agree with” game.

              • grosswait 6 hours ago

                Yeah, I knew what was meant. Unsafe being a moving definition by an arbitrary set of people.

        • mech422 20 hours ago

          Terms of Service would like to have a word....

          Not like limiting uses of products is anything new

        • rpdillon 20 hours ago

          Not really. Services are provided on terms acceptable to both parties. This isn't about what's legal, it's about the terms of the service agreement.

        • toomuchtodo 20 hours ago

          Providers are free who they choose to do business with, or not do business with. Are you arguing that the government should be able to compel a provider to allow their use when it’s well documented the government does not respect nor adhere to the rule of law? I think you misunderstand commerce and contract law.

          • alex43578 16 hours ago

            Providers are bound by plenty of laws that alter how they do business or who they do business with.

            You can’t say “no disabled people at your business”. Hell, you can’t even say “no fake service animals at my restaurant”. Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.

            • toomuchtodo 7 hours ago

              When Congress makes the law, you will be accurate. At this time, there is no law that enables the US executive branch to achieve their desired outcome of strong arming Anthropic.

              > Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.

              Your average American is low functioning, low education, vibe driven with a 6th-8th grade reading level, so this ("What Americans think") is not terribly relevant in my opinion. Provide statute and case law.

        • bdangubic 20 hours ago

          Amazing to read this. Hoping you are not an American… Reading this thread is like comrade after comrade!

  • egorfine 10 hours ago

    > two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

    They are only contradictory if you think about it.

  • gclawes 20 hours ago

    > This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use.

    Why the hell should companies get to dictate on their own to the government how their product is used?

    • theptip 20 hours ago

      Every company is free to determine its terms of use. If USG doesn’t like them they should sign a contract with someone else.

      • grosswait 20 hours ago

        Every company is free to state their terms of use, but not all have been upheld when challenged

        • otterley 19 hours ago

          What’s your angle here? I’m genuinely curious. If the government told you that you had to muck out portable bathrooms with your bare hands even if you didn’t want to, wouldn’t you find that objectionable?

          • alex43578 14 hours ago

            I’m sure they would find it objectionable, just like how many reacted negatively to the draft, but it was imposed anyways.

            The government should have far less control and power over individuals and businesses than it currently does.

          • lynx97 12 hours ago

            Well, the rates are different from country to country, but everyone knows taxes. I really don't want to give away almost 40% of my income... Does anyone care what I want or like?

            • otterley 6 hours ago

              Taxes aren't forced labor or indentured servitude, and aren't prohibited in any democracies. They're imposed by law through the actions of our duly elected representatives.

        • theptip 6 hours ago

          What grounds for challenge do you imagine here?

      • blitzar 14 hours ago

        > Every company *

        * excludes tiktok

      • alex43578 16 hours ago

        Can I run a business and say “No use by insert race here”? If they don’t like it, they can shop somewhere else, right?

        • theptip 6 hours ago

          Of course not, nor can you write a contract that places your customers in indentured servitude. Those would be illegal contractual terms.

          But this is irrelevant to the case we are discussing, where Anthropic used legal contractual terms, and the government willingly signed them, then demanded they be changed after the fact.

        • FrancisMoodie 14 hours ago

          Ofcourse we're gonna compare being against the use of technology for Mass surveillance/Autonomous weapons with being racist, like wtf kind of argument is this? So because businesses can't implement racist policies they shouldn't be allowed to have any policies concerning the use of their tech? Mindblowing.

          • lynx97 12 hours ago

            Well, the question is the fine line between racism and discrimination. Or, whats the difference between misogyny and pacifism? What am I allowed to dislike? Is it already across the line if I dont like dogs? What if I had really bad experiences with dogs in the past? Is it OK now, or still not? What if my childhood was basically a crazy mess because of my mother? Am I allowed to be careful around women now? Or am I creepy because of that? What if I escaped a warzone during my childhood? Is militant pacifism OK now? What if the military saved my family from being killed? Is it OK if I am pro military budget, or am I a system-whore now?

        • tenuousemphasis 15 hours ago

          Kegsbreath isn't a protected class.

          • alex43578 14 hours ago

            If your argument is “every company is free to determine its terms of use”, except when told otherwise by the government, you’ve proven my point. The government is saying they need to provide unfettered access.

            • JCharante 13 hours ago

              “Told” is different than it being written into law. Go update the laws first and then you have a valid argument

              • alex43578 3 hours ago

                So they'll be able to use the already-written DPA, right?

                • theptip 2 hours ago

                  They can try, but:

                  1) it’s pretty transparently obvious that Anthropic is not a supply chain risk, and that this is a retaliatory gesture. So I don’t support that usage.

                  2) if they do try, Congress or SCOTUS could well reduce or remove that authority. I give the Trump admin enough credit to assume that they are considering carefully which laws they spend in this way, DPA is a valuable chip they may need to spend for something more valuable than Hegseth’s temper tantrum.

    • randerson 19 hours ago

      Because technology companies know more about their product's capabilities and limitations than a former Fox News host? And because they know there's a risk of mass civilian casualties if you put an LLM in control of the world's most expensive military equipment?

    • Hnrobert42 20 hours ago

      Because the government is here to serve us. Not the other way around.

      • no-dr-onboard 20 hours ago

        The government has a responsibility to protect its constituents. Sometimes that requires collaboration. This isn’t hard.

        • epistasis 20 hours ago

          Is this one of those times? Seems pretty clear it's not.

          The third amendment is there for a reason. I am a third amendment absolutist and willing to put my life on the line to defend it.

        • staticassertion 19 hours ago

          I wonder what you can't justify this way.

          • no-dr-onboard 18 hours ago

            That’s a good question. Assuming a righteous and just government:

            The government couldn’t justify the killing of innocent civilians.

            The government couldn’t justify the killing of the unborn.

            The government couldn’t justify eugenics.

            There are objective moral absolutes.

            • staticassertion 10 hours ago

              Wow, that's just so many assertions and none of them follow from the statement that the government can break the law in order to protect its citizens. In all of those cases I can just say "they can if it is to protect its citizens". Remember, the premise here is that you are performing the act in order to protect constituents. So before all of those statements you have to assume "They are doing this in the genuine believe that it protects constituents".

              The argument so far seems to be "They can do anything, but there are moral absolutes that I can personally list out, and in those cases they can't do those things". That is a hilariously stupid view of the world but sadly a common one.

              Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.

              • no-dr-onboard 5 hours ago

                I normally don't respond to bad faith responses like this, but I found the following quote pretty funny:

                > Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.

                This is a silly and self refuting statement.

                • staticassertion 3 hours ago

                  > This is a silly and self refuting statement.

                  No it isn't and it's a pretty standard argument.

                  Other than insulting you, my response was pretty damn charitable tbh. I tried to state your argument for you as best I could.

    • singleshot_ 20 hours ago

      Same reason they cant quarter troops in your house: the law

    • throw0101c 20 hours ago

      > Why the hell should companies get to dictate on their own to the government how their product is used?

      Well:

      """

      Imagine that you created an LLC, and that you are the sole owner and employee.

      One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"

      There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.

      """

      * https://x.com/deanwball/status/2027143691241197638

      • grosswait 19 hours ago

        This is a terrible analogy. Imagine you’re an LLC that signed a contract to mine minerals, but your terms state you’d only mine in areas you felt safe. OSHA says it’s safe but you disagree, because….. any number of reason unknowable to an outsider. Maybe you just don’t like this OSHA leadership. That is more like what is happening.

        Signing a contract with Anthropic assuming they wouldn’t rug pull over their own moral soapbox was mistake number one.

        I love anthropic products and heavily use them daily, but they need to get off their high horse. They complain they’re being robbed by Chinese labs - robbed of what they stole from copyright holders. Anthropic doesn’t have the moral high ground they try to claim.

        • otterley 19 hours ago

          The (hypothetical) contract is clear, though. The condition is stated in objective terms: “in areas you felt safe.” If the Government agrees to this, then they should be bound just like any private counterparty would. If the Government didn’t agree to this, they should have negotiated that term out in favor of their preferred terms.

          • grosswait 8 hours ago

            I agree. Which is why I said signing a contract with anthropic was a terrible idea in the first place.

        • WD-42 19 hours ago

          Is it a rug pull? Where in the terms of service does anthropic say their models can be used for autonomous weapons and mass domestic surveillance?

  • quietbritishjim 21 hours ago

    Those aren't contradictory at all. If I need a particular type of bolt for my fighter jet but I can only get it from a dodgy Chinese company, then that bolt is a supply chain risk (because they could introduce deliberate defects or simply stop producing it) and also clearly important to national security. In fact, it's a supply chain risk because is important to national security.

    • NewsaHackO 21 hours ago

      No, in your example, if the dodgy Chinese company is a supply chain risk due to sabotage, why would they invoke an act to force production of the bolts from the same company for use for national defense preparedness, which would be clearly a national security risk?

      • snickerbockers 20 hours ago

        The OP specifically mentions this in the context of "systems" (a vague, poorly-defined term) and "classified networks" in which Anthropic products are already present. Without more details on what "systems" these are or the terms of the contracts under which these were produced it's difficult to make a definitive judgement, but broadly speaking it's not a good thing if the government is relying on a product which Anthropic has designed to arbitrarily refuse orders by its own judgement.

        I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.

        • NewsaHackO 18 hours ago

          >I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.

          I don't think that is what is happening. What most likely is happening is that they want Anthropic to produce new systems due to the success of the previous ones, but they are refusing to do so because the new systems are against their mission. What seems like the DoD is attempting to do, on one hand, is call them a supply chain risk to limit Anthropic's business opportunities with other companies, and then, on the other hand, simultaneously invoke DPA so that they can compel them to make the new system. But why would the government want to compel a company to make a system for them due to a need for national prepareness that they designated as such a supply chain risk that they forbid other companies that provide government services from doing business with due to the national security risk of having a sabotaged supply chain? It doesn't really make sense, other than from a pure coercion perspective.

          • snickerbockers 16 hours ago

            >limit Anthropic's business opportunities with other companies

            Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.

            • SpicyLemonZest 8 hours ago

              The question is, after witnessing Hegseth crash out against one of their fellow contractors over practically nothing, will contractors want to walk the tightrope of doing business with Anthropic but promising it never ends up feeding into a government contract?

    • estearum 21 hours ago

      It's easy to resolve an alleged contradiction by just ignoring one half of it lol

      Try introducing DPA invocation into your analogy and let's see where it goes!

      • simoncion 14 hours ago

        > Try introducing DPA invocation into your analogy and let's see where it goes!

        When I introduce that, I see Anthropic's management getting Tiktok'ed.

        It can be true that Anthropic's products are essential for national defense and also true that the management of the company are a supply chain risk.

        Is any of that true? Well, so much of what has been done in the name of "national defense" & etc over the past many decades has clearly not been done for reasons that are true, so -when it comes to "national defense"- I don't think that the truth actually matters much at all.

        • estearum 10 hours ago

          TikTok'd as in requiring a novel act of Congress? Sure!

          DPA and FASCSA as they stand today cannot be used the way DOD is claiming they can be.

    • gipp 21 hours ago

      "Supply chain risk" is a specific designation that forbids companies that work with the DOD from working with that company. It would not be applied in your scenario.

    • ray_v 21 hours ago

      The analogy doesn't work here ... In your scenario they are ok with using the bolt as long as the Chinese company promises to remove deliberate defects - which is of course absurd ... AND contradictory.

tabbott a day ago

An organization character really shows through when their values conflict with their self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

  • idiotsecant a day ago

    The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality when a possible outcome is existential risk to the species is a 100% chance of failure on a long enough timeline. We need massive disincentives to bad behavior, but I think that cat is already out of its bag.

    • _def 17 hours ago

      On a long enough timeline literally everything has 100% chance of failure. I'm not trying to be obnoxious, I just wanna say: we only got this one life and we have to choose what to make of it. Too many people pretend things are already laid out based on game theory "success". But that's not what it's about in life at all.

    • freakynit 20 hours ago

      I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.

      Power corrupts, and absolute power corrupts absolutely.

flumpcakes a day ago

This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

  • davidw a day ago

    This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.

    • inigyou a day ago

      Some people are calling it the "American century of humiliation"

      No other country that went through a phase like this has ever recovered. Not even in a century.

      • davidw a day ago

        I won't give in to doomerism.

        Germany, Italy and Japan are all wealthy, stable democracies right now. Not without their problems and baggage, but pleasant places in a lot of ways.

        • mobilefriendly a day ago

          All three have active US military bases on their soil and enjoy the economic surplus of living under the US defense umbrella.

          • davidw a day ago

            The post WWII system was imperfect in many ways, but it was also mutually beneficial and worked out pretty well despite the problems.

            And we're throwing that all out the window.

            US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.

          • Quarrelsome 12 hours ago

            don't make it out like its a favour. The US have done very well out of their defense umbrella ensuring its global dominance for most of last century.

            Most powers have to pay in blood to do what they want geo politically without question. The US inherited a global state where many potential rivals were weak and helped keep them weak. It was a cost worth paying and its a shame that current US leaders are so cheap and foolhardy to not see what they're throwing away.

          • matwood 15 hours ago

            You seem to imply the US reaps no benefit from providing security?

          • bonsai_spool a day ago

            Britain essentially ceded its bases to the US at the end of WWII - these things aren’t as durable as they may seem.

            • Quarrelsome 12 hours ago

              that's cos WW1 financially broke Britain, then WW2 happened.

          • 4gotunameagain 16 hours ago

            All that economic surplus - and much more - flows back to the US. How do you think the US can sustain that amount of USD printing without inflation ? The rest of the world is buying those dollars.

        • remarkEon 16 hours ago

          Germany: functionally paralyzed government that has the far right knocking at the door because the fractured coalition of left-centerleft-centerright continues to refuse to do what voters ask for.

          Italy: Nominally center-right government, similar problems as Germany, less the energy issues

          Japan: just elected a landslide right wing government that is going to change the constitution so they can build an offensive military again

          Curious.

          • poly2it 15 hours ago

            I don't perceive those problems to be inherent to the territories or peoples of the countries. All have had potential to change and have done so extensively since the Second World War. There isn't a universal explanation or root behind the issues these countries are facing today, unless you are willing to abstract it to just "economics".

        • micromacrofoot a day ago

          They got bombed to shit first

          • davidw a day ago

            It'd be nice to avoid that part.

            • Fischgericht a day ago

              Then it won't work. The current iteration of Germany is fully based on having been bombed to get a fresh start. If you already have something, you won't change it. If you have to re-build, you will implement improvements. No bombs, no reset, no joy.

              • RGamma 12 hours ago

                It is not inevitable that you come back improved. It is not inevitable that you come back at all.

              • scottyah a day ago

                Ok what about the Netherlands, Spain, Nordic countries?

                • Fischgericht 21 hours ago

                  Very different countries.

                  The Netherlands for example got their last reset by completely losing the Dutch empire.

                  Also, some societies have flatter curves than others. That really maps 1:1 to your style and culture of living and where the priorities are.

                  If your priorities are to be the best as fast as possible (Germany) you will have less time between resets. If your priorities are "let's chill and wait until the coconut falls from the tree into my hand", your society might be able to have a far longer time between resets.

                  But in the end: It's an iterative process. Which means: There must be iterations.

                  • davidw 21 hours ago

                    This sounds about as scientific as phrenology.

                    • Fischgericht 10 hours ago

                      No, it's really simple: Programming, Math, AI, blabla - those are all abstractions of what we have seen in nature.

                      Once you have understood that, you can just apply the rules learned backward, and they will typically match pretty well. I can buy fractal veggies in a supermarket.

                      And also, it's just data. Just take some random samples. That even civilizations like the Mayas who have faaaar more time on the clock than say than the US had multiple full resets.

                      Another random sample I've just pulled out of thin google air: San Francisco Fire of 1851. Everybody knew that wood burns. And that wooden buildings burn. And that wooden cities burn. Did anyone decide to tear down their house and re-build with a different material? No. This happened after everything had burned down to the ground. That was the reset needed.

                      I think it is very clearly an iterative process. Have a look.

                      • mrguyorama 3 hours ago

                        >And also, it's just data. Just take some random samples.

                        You are not at all working with "data" or "samples". You are just making arguments and supporting them with examples. That's not science, that's philosophy or persuasive essay writing.

                        You are generalizing those arguments in insane ways. Just like the worst philosophy. You are drawing conclusions from extremely weak claims that don't even map to reality in the first place.

                        You can't say "Math works to describe the head of broccoli so I can just think hard enough and understand geopolitics". That's emphatically not science.

                  • prmph 12 hours ago

                    Not sure why you are being downvoted. What you are saying has a lot of truth to it. It is directly observable in the history of nations.

                    Germany has to be forced to accept that, although it was advanced, it could not have the European empire it thought it deserved. Japan had to learn a similar lesson. The speed and horror of the reset was in direct proportion to the potential for advancement and high society in these nations.

                    Ghana, where I come form, for example, has not has to experience any massive upheaval even from its pre-colonial and colonial days up till now. Our society is laid-back, and moves slowly. Even many other African countries have had to have their national reckoning in the form of civil wars and other huge upheavals in order to settle into a viable way of existing and advancing.

                    And, like you said, this is iterative. Given the nature of people in a nation and its fundamental geopolitical position, the same question will need to be answered after every N generations. Germany is central to Europe, and already a generation that is far removed from the world wars are starting to rethink why it shouldn't assert itself more strongly. Same in Japan.

                    THe way to analyze the iterations of the US is to understand that the primary threats are from within. It may not implode complete, but civil war and the civil rights era show that the potential is there for massive unrest and violence.

                    • Fischgericht 10 hours ago

                      [I am getting downvoted all the time because the combination of German directness with autistic directness and lack of empathy combined with dark humor is not exactly compatible with societies where it is seen as offensive, rude or even aggressive not to sugar coat your messages. If one side treats this as a data exchange, and the other side processes the data but including emotions it will obviously have compatibility issues. But that's my "problem", so I accepted that typically if I post stuff, I first get upvoted massively, and after a day downvoted to hell. And that's OK. Again, my problem to be incompatible with a standard.]

                      And yes, it is interesting to see that on Polymarket people are betting involving a lot of emotions. No, you will not bet on getting killed by masked militia. Nobody is going to say "Hey, I'll bet $1000 that I will get cancer soon!".

                      But if you leave aside all the emotions, and just look at the data: No, there is no realistic scenario the US could magically recover from all checks and balances and rules and laws and regulations and decency having been destroyed. Competence, leadership and shared knowledge had been erased in all areas of society - Science, Development, Capitalism, Arts. How are you going to rebuild all of this, especially if the best case is that 60% of the people will agree to rebuild, while 40% insist they need to keep destroying stuff?

                      This is not a scenario looking at historical data any prior "high culture" (or whatever to call this) had been able to recover from.

                      Elsewhere in this thread is was mentioned that Germany still had all the Nazis in place everywhere because else the country would not have worked. But that is not the point. The reset was:

                      a) All is destroyed and MUST be rebuild because else we will freeze and starve to death.

                      b) Your Nazi neighbor is still there, but it has been made VERY clear who is the new sheriff in town: First the allies, but then pretty much the USA. Germany is still paying for having US solders in the country, providing valuable expensive land for free, and paying for most of the supply chain that is not staffed with US soldiers. And that is the accepted normal.

                      c) What was left on industry was physically taken as reoperations. Especially the soviets, but also the French did dismantle hole factories and machinery, moving that to their own countries (rightfully so.)

                      From what I know from school, reading and talking to grandparents: Germany before WW2 doesn't have much relation to pre-WW2 Germany. Suddenly it was normal that women can to "men's jobs" (due to those being more on the dead side). McDonalds. Hollywood. etc

                      It really makes sense to have a look at a couple of pictures of what was left of Germany after WW2. It's just someone slapping an existing brand name onto a new product. And in this case, personally I would have regarded the brand as damaged and would have picked a different name.

              • davidw a day ago

                I am less confident about my predictions for an uncertain future. There's all kinds of ways different things could go.

                I didn't say we needed to follow their example to the letter; it was just one counterexample to the "woe and ruin for 100 years" comment.

                • Fischgericht a day ago

                  Yes, but it is actually scientifically correct and proven on all sorts of layers. Biology, Maths, whatever. Not doomsdaying, just data analytics.

                  Societies are not operating like a sinus curve like say summer/winter cycles. They are upside-down "U"s. After the peak comes decline, but after the decline there is NOT recovery/growth again before you have a reset.

                  Germany was the huge winner of WW2 in the sense that after having had a high society they directly were allowed to get another such run. But as nobody wants to bomb us ) anymore, Germany is also in decline now waiting for a reset to come one day...

                  Sadly the USA will also need a reset before things can begin getting better again.

                  ) I was born in Germany and lived there for 40 years.

                  • RGamma 12 hours ago

                    References to scientific proofs?

              • eternauta3k 12 hours ago

                Germany wasn't a fresh start. The de-nazification ended up being a bit of a joke and (AFAIK) the first governments were full of ex-Nazis.

            • protocolture a day ago

              James May did a documentary loosely based on this. "The Peoples Car"

              Basically analysing the economies of WW2 participants via their automobile industries.

              Its staggering how being bombed into the ground has forced technological and economic innovation. And how the inverse, being the bomber, has created stagnation.

            • galangalalgol a day ago

              I don't think it would matter even if the us did have to start again. The entire us alliance after ww2 benefited from the same structural causes of increased pluralism and egalitarianism. A fractured elite, complex international trade, expanding and increasingly difficult to control communication channels, and a growing bureaucracy. These all inhibit autocratic concentration of power. International trade became uncomplicated, there is one manufacturer that is not a consumer, and many consumers. This leads to an increasingly less fractured elite. The structural reasons for democracy and rules based order are all fading. The us is just a really big canary.

            • King-Aaron a day ago

              The people running the show are all building generational fallout shelters in new zealand. As seems to be the real 'whitehouse ballroom' plan too. They seem to be expecting that part.

            • pear01 20 hours ago

              Congress is the problem, but not in the way most describe.

              Congress has abdicated its powers because as an institution it is broken. Several inland states with total state wide populations less than that of major metro areas on the coasts have the same amount of senators as every other state has - two. This means voters in a lot of states are over represented. Meanwhile, they say land doesn't vote, but in the United States Senate the cities and localities with the most people that drive much of our growth and dynamism are severely underrepresented. The upper and most important chamber of the Congress is thus undemocratic. Given it's an institution deeply susceptible to minority gridlock that depends on wide margins to do anything, well now more often than not it simply does nothing. An imperial presidency thus frankly becomes the only way the country can actually get most things done.

              This two senators for every state arrangement was a compromise agreed to when constitutional ratification was in doubt, when the USA was a weak, newborn country of about 3 million people confined to the Eastern seaboard at a time in our history where our most pressing concern was being recolonized by European powers. The British burned down the White House in 1812 imagine what more they could have accomplished if the constitutional compromises that strengthened the union had not been agreed to.

              This compromise has outlived its usefulness. No American today fears a Spanish armada or British regulars bearing torches. These difficult compromises at the heart of America already led to one civil war.

              The best we can do is create a broad political movement that entertains as many incriminations as possible (probably around corruption/Epstein, which must make pains to avoid any distinction between say a Bill Clinton or a Donald Trump) so we can get past partisan bickering to get enough of mass movement to try to usher in a new age of constitutional amendment and reform.

              If it doesn't happen this cycle of Obama Trump Biden Trump will continue until this country elects someone who makes Trump look like a saint. It can happen. Think of how Trump rehabilitated Bush. We already see the trend getting worse. And if it does, then the post WWII Germany style reset being mentioned here will then become inevitable.

              • soderfoo 12 hours ago

                How do you think this would play out? Changing the apportionment of the Senate, aside from being a political and legal nightmare, would also create monumental constitutional crisis.

                First, the Connecticut Compromise is a democratic underpinning of the US. It was central to the formation of the nation, and any attempt to alter it would be a foundational structural change to the constitution to say the least.

                I understand the concerns about one generation binding another without recourse. Legal scholars differ on whether Article V, which implements the compromise, can be amended or not.

                But for the sake of argument, let's say it can. It would be an insurmountable task requiring the following:

                1. A supermajority in both houses of Congress (67% in the Senate and 66% in the House) to propose the amendment.

                2. Ratification by three-fourths of the state legislatures (38 out of 50 states) or by conventions in three-fourths of the states.

                3. Consent of the states that would lose their equal representation in the Senate.

                4. Overcome any legal challenges that would likely arise at every step of the process.

                The result would be a dramatic redefinition of federalism and democratic representation. This wouldn't be a cosmetic change, it would be a fundamental alteration to the structure of the government and constitution.

                Very few things were deemed "unamendable" and entrenched in the constitution before, both explicitly and implicitly, but now it would all be up for grabs. Now nothing is irrevocable.

                What's to stop future generations from altering other fundamental principles? While we may complain of being bound by the decisions of our ancestors, we would be opening up a Pandora's box of constitutional instability for future generations, binding them to the whims of a (slim?) majority of the current generation's political agenda.

                I think that is the best case scenario. The worst, and I think a very possible scenario, is that states losing representation would claim that such a drastic and material change to the constitution upends the root of the bargain that led to the formation of the union, and would likely seek to secede. You may have achieved your goal of changing the apportionment of the Senate, but at the cost of the union itself. There are far easier and less risky ways to achieve political change.

                • matthewdgreen 7 hours ago

                  We could add new states. For example, Washington DC has 702,000 people with zero Congressional representation, and they're currently occupied by Federal troops without any voting recourse. If they were made a state, they'd be bigger than Wyoming and Vermont. Puerto Rico is also a US territory with 3.2 million people and zero Congressional representation. As a state it would be larger than 20 existing states. This doesn't "fix" the problem but it does ensure that more U.S. citizens gain access to representation in Congress, while also shifting power to more densely-populated areas.

                  • soderfoo 6 hours ago

                    True. I'm not as familiar with the politics of DC, but my limited understanding of the PR statehood situation is that the GOP is unlikely to approve what would presumably be 2 new safe democratic seats in the senate.

                    If I remember correctly, the governor of PR would appoint the first 2 senators. A tactic could be to promise to appoint 1 republican senator as an enrichment to approve statehood. It's a real shit situation.

                    There are more Puerto Ricans living in NYC and Orlando than in PR. I'd like to visit before the little family I have left there leaves or dies out.

        • inigyou a day ago

          [flagged]

          • popalchemist a day ago

            Japan's economics are mostly rooted in population issues. Have you ever been? Even though wages are stagnant, the people are among the healthiest in the world and they're known for the way their society's public services ACTUALLY work.

            Not sure about Italy, but Germany, while not without its problems, is a beacon of democracy, progressivism, and self-correction.

          • lovich a day ago

            > Germany is still extremely weird about anything to do with Jews

            > I've never been to Italy but they don't seem very productive either.

            Ok green poster. You need to look up more about world economies if you are going to confidently say things like Italy isn’t that productive. Combined with your comment on Jews in Germany I just assume you’re here to push propaganda, but if not please read up more on Italian economic output compared to, I don’t know, maybe the G7 countries?

      • Dumblydorr a day ago

        That’s just historically inaccurate. You had massive upheavals across numerous countries throughout time, this is small in comparison to the civil war’s impact on the USA for instance. You think this is worse than half the government rebelling and revolting and killing an amount of young men that today would be equivalent to 6 million deaths? It’s bad now but your comment lacks historical evidence.

      • jonplackett a day ago

        China seems to have recovered pretty well.

        • AuthAuth a day ago

          Not really. China only seems good because there is a war in Europe and the US is shooting themself in the foot. They're polluting and strip mining their country, suppressing wages and funneling the profit into companies all while increasing surveillance and decreasing freedom of opinion. Oh but they put down a few solar panels and then paid for people to write articles about it.

          • davidw a day ago

            Their economy lifted a bunch of people out of poverty. That's positive.

            However, in terms of 'democracy' they're still way worse off than the US right now, even if the US is headed in a bad direction.

            • wraptile 21 hours ago

              > Their economy lifted a bunch of people out of poverty

              This is fallacious as every economy that started at extreme poverty lifted a bunch of people out of poverty.

              Unless we invent a time machine and do an A|B test we can't really attribute the success to policy when _any_ policy would have clearly lifted out a bunch of people out of poverty (basically almost impossible to not go up from extreme deficit). The closest we can do is look at similar scenarios like Taiwan which also lifted a bunch of people from poverty while retaining more human rights.

              • davidw 21 hours ago

                Plenty of places have managed to "keep on keepin' on" with their poverty levels.

                I'm not saying what they've done was the best way, only way or anything of that sort: only that it happened.

          • grvbck 21 hours ago

            > They're polluting

            They absolutely are, but per capita, USA is polluting 49.67 % more than China.

            Source: https://worldpopulationreview.com/country-rankings/carbon-fo...

            • jonplackett 12 hours ago

              Also they are making all our stuff for us. That’s our pollution too guys.

            • randallsquared 8 hours ago

              But only half as much per dollar, so the lower pollution per capita is just poverty, which is likely to decline over the next few decades as it has been (assuming we have decades left).

          • Barrin92 21 hours ago

            >Oh but they put down a few solar panels

            the few solar panels in question are a united kingdom worth of green energy each year, about a royal navy worth of marine tonnage every two and they lifted more people out of poverty over the span of two generations than most of the rest of the world combined. Shenzhen produces about 70% of the entire world's consumer drones, now the primary weapon on both sides of the largest military conflict in the world. Xiaomi, a company founded in 2010 15 years ago decided to make electric cars in 2021 and is now successfully selling them.

            As Adam Tooze has pointed out it's the single most transformative place in the world, if you're not trying to learn from it you're choosing to ignore the most important place in the 21st century for ideological reasons

          • bamboozled a day ago

            I used to pretend China wasn't absolutely smashing the USA, but it looks like it is. They basically make everything modern civilization relies on, that's an insane amount of leverage over the rest of the world. That combined with renewables and nuclear and their diminishing need for foreign oil because of that is pretty incredible.

          • idiotsecant a day ago

            They're also speedrunning a world class power distribution system and deploying a massive amount of renewable power amoung a whole mess of other infrastructure. They've got the ability to focus an entire nation into achieving technical goals and they're rapidly improving quality of life in average while maintaining an industrial base that the US can only remember fondly. They might not meet western standards for individual freedoms and rule of law, but they're undoubtedly a rising world power.

          • lanfeust6 21 hours ago

            This doesn't make much sense. Since the late 19th century, every country that got rich also heavily polluted the environment, though increasingly less over time. As it stands, fossil fuel demand in China has plateaued. The "wage suppression" thing also doesn't track; their citizens got much, much richer since Nixon's visit, despite being on average poorer than Westerners. Their GDP per capita is low because there's like a billion of them in the country.

            The only thing to say is that it's still authoritarian. Once that gets a hold of a country, it's very difficult to shed off. Interestingly, both South Korea and Singapore shifted away from being dictatorships and were not ideologically socialist. Countries taken over by Communists remain authoritarian. The true believers will never give that up.

            • davidw 21 hours ago

              Agree with much of this. However: plenty of Central/Eastern European countries seem like they have pretty definitively shaken off communism in favor of pretty standard European style capitalism/social democracy.

              • lanfeust6 19 hours ago

                That is true, though I chalk some of that up to disdain for Russian imperialism/colonialism, and bargaining to remain out of its influence

      • testfrequency 15 hours ago

        On eastern social media a big discussion going around right now is referring to America as being on the “kill line”.

        The world knows the US is close to folding in on itself.

      • nostrademons 17 hours ago

        U.S. Civil War? Roman Crisis of the 3rd Century? Russian Revolution? England's War of the Roses? China's periodic dynastic changes?

        They usually don't come back with the same political organization - that's sorta the point. But plenty of civilizations come back in a form that is culturally recognizable and even dominate afterwards.

      • giwook 18 hours ago

        I’d be interested to see some specific examples cited as it’s hard to take this comment at face value.

      • IAmGraydon 19 hours ago

        This is a laughably ridiculous assertion.

      • tsunamifury a day ago

        Rome was 'in decline' for 1000 years... these things are mostly feel good blather and not realistic statements on the position of nations

      • gbnwl a day ago

        Is this a joke that’s going over my head? The country we all know the term “century of humiliation” from has recovered and is literally a superpower right now?

      • Pxtl 18 hours ago

        The Unenlightenment. Dereconstruction.

        > No other country that went through a phase like this has ever recovered. Not even in a century.

        Oh I can think of a couple in the '40s that bounced back after a while.

    • eunos 13 hours ago

      > generational effort to fix

      You imply that there are folks that willing to fix or even recognize that things are broken in the first place

    • mschuster91 14 hours ago

      > It's going to be a generational effort to fix what these people are breaking more of every day.

      That assumes you have people wanting to fix what is broken - and I have a hard time believing even now that they are in the majority.

      MAGA and their supporters? They want to see the world burn, if only for different motives: the "left behind" people in flyover states just want revenge, the Evangelicals literally believe they can cause the Second Coming of Christ by it [1], the Russia fangroup wants to see Ukraine burn to the ground and the ultra-libertarians/dont tread on me folks want all government but maybe a bit of military to go away. That is what unifies so many people behind the Trump banner.

      The problem is, on the left side you got a bunch of people completely fed up as well. Anarchists of course, then you got the "left behind" people who still want revenge on the system but aren't willing to enlist the help of the far-right for that goal, you got revolutionaries of all kind... and you got those who believe that the rot runs too deep to fix by now.

      And let's face the uncomfortable truth: every one of them, bar the Evangelicals and the Russia apologists, actually has a decent point in wanting to see the world burn. Post-Thatcher capitalism has wrecked too many lives, the US Constitution hasn't seen a meaningful update in decades and no overhaul in centuries, the "checks and balances" that were supposed to prevent a Trump from reaching office or rising to the position of effective dictator have been all but destroyed, the "American Dream" has been vaporware ever since 2007...

      [1] https://www.bbc.com/news/articles/c20g1zvgj4do

      • plaidfuji 6 hours ago

        Yeah… turns out you have to keep a certain balance of domestic industries to keep 350 million people employed in a capacity where they don’t want to burn down the whole system. But that would be socialism.

        Now you’ve got the people whose jobs suck and want their old jobs to come back vs the people whose jobs suck and just want to dispense with the illusion that everyone needs to be employed. Either way, the money-generating corporate automaton needs to cough up some of its profits to fund people’s existence. If everyone could just agree on how, maybe they’d get somewhere.

        Meanwhile, I will continue to cling to my slice of the corporate automoton pie.

  • this-is-why 15 hours ago

    I’ve been called bad things on HN for suggesting there’s even a whiff of corruption in this administration. That alone scares me. Deeply.

    • Quarrelsome 12 hours ago

      there's more money and "don't rock the boat" mentality on here as a consequence of that and they try to keep the moderation light. So its just not discussed enough to give people still tragically mired in that tribalism, the appropriate levels of shame.

  • saulpw a day ago

    Hope is not a plan, unfortunately, so if that's all we've got, I don't have much hope.

  • hightrix 6 hours ago

    > What is becoming of the USA?

    There was a coup by a foriegn adversary and Americans lost.

  • ypeterholmes 21 hours ago

    The current situation in the US is the depressing thing- articles like this give me hope. Real Americans aren't having these BS authoritarian violations of our constitutional rights.

  • jorblumesea a day ago

    You mean, what's been happening to the USA? this isn't a new trend. Militarization of police, open attacks on democracy, unilateral foreign policy moves.

    the country jumped the shark post 9/11 and has been on a slow rot since then.

    • rjbwork a day ago

      Indeed. Bin Laden succeeded beyond his wildest dreams. He kickstarted our self-destruction.

      • blitzar 9 hours ago

        I think the shoe lace bomber did more than bin laden - decades of ritual humiliation at airports was normalised.

        • randallsquared 8 hours ago

          The TSA wouldn't exist with bin Laden. The TSA still exists, but the effects of the shoe bomber are now done, in the sense that shoes aren't required to come off as of last year.

    • wilg 15 hours ago

      No, this is cope, Trump is deeply different.

      • asdff 14 hours ago

        Trump is different because he is flailing to deflect from the fact he is deeply legally compromised. But he is reaching into a toolbox of things that have already been made available.

      • Quarrelsome 12 hours ago

        yeah there is close to little relation between the current administration and pre-Trump GOP. That entire party is now compromised. Beforehand you could always assume they'd be locked out by legal, business, or party pressure but that hasn't been seemingly much of a thing since Trump (as seen most recently in the illegal tariffs the administration continues to try to apply globally).

      • sneak 10 hours ago

        The framework for collecting the data to feed to the AI, exposed by Snowden, was designed and implemented in the wake of 9/11 by Bush when Trump was still busy banging teenagers with Epstein and not even thinking about politics.

        Then Obama re-authorized and expanded it. Trump and Biden haven’t even moved the needle, really.

        Now they’ve put up tens of thousands of permanently installed facial recognition cameras (not Flock ALPR, those point the other direction to get number plates) all over SoCal and southern Nevada (that I’ve directly observed; presumably it is happening in many other cities as well), and TSA and CBP are collecting as many ID-verified sets of facial geometry as they possibly can, whenever they can. ICE is of course using it nonstop, as well as feeding additional geometry into it. They’re flying drones 30 feet above sidewalks in downtown LA to mass collect faces.

        The DoD can’t wait to deploy SOTA AI against Americans en masse.

    • sourcegrift a day ago

      [flagged]

      • solid_fuel a day ago

        "Recently turned American citizens" have every bit as much right to free speech, as guaranteed by the 1st amendment, as any other American citizen does. That's the whole point of the constitution. To pretend otherwise betrays the core values of our democracy.

      • rjbwork a day ago

        Yeah well my family's been here for hundreds of years and fuck him. They're more American than that piece of shit will ever be.

        • anonnon 21 hours ago

          > They're more American

          Do you mean your family, or Congresswoman Omar?

          • rjbwork 21 hours ago

            The latter, but both for sure.

      • guelo a day ago

        That's congresswoman "recently turned American citizen" to you sir. BTW she became a citizen 26 years ago. My favorite part of Ilhan Omar being an outspoken congresswoman who keeps getting reelected is how it drives islamophobes crazy.

      • hobs a day ago

        Complaining about the head of the government publicly so important that its included in the first amendment instead of one of those other ones.

      • le-mark a day ago

        Selective memory as usual, outright dishonest at that. Let’s remember MTG heckling Biden. The when and who started heckling the sotu is well known.

        • FrankBooth 21 hours ago

          Let’s rush to destroy all norms entirely, since the other side started it it’s totally justified and will have no negative consequences whatsoever.

          • le-mark 21 hours ago

            This is an intellectually dishonest response. The person I responded to clearly attempts to place blame on one side, ignoring the facts of when the violation of norms began. It does matter that one side has destroyed all norms.

        • this-is-why 15 hours ago

          I think it was “you lie” under Obama. But my history knowledge awful. I wouldn’t be surprised if there was a duel at a pre civil war sotu.

      • krapp a day ago

        My brother in Christ we shoot our Presidents for sport in this country. There's nothing more American than heckling the government and God bless any immigrant who doesn't put up with its bullshit.

      • idiotsecant a day ago

        The irony inherent in this post is stunning in its purity. Weapons grade. I should be wearing goggles just to view this post. It's off the charts.

  • lm28469 14 hours ago

    All of what's happening is a symptom, there is no reason it would change course with the next elections, all of this is the logical development of decades of cultural, political and morale rot in the US society. Trump isn't a bad moment we have to push through before we get back to the baseline, there has been no serious push back from anyone so far, it's here to stay

  • georgemcbay a day ago

    > Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

    Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.

    I hope I am wrong.

  • 1024core a day ago

    [flagged]

    • hungryhobbit a day ago

      That seems to be a denial of reality. Democrats are already winning races all over the country, in places that (traditionally) have been Republican strongholds.

      But don't let me stop you from believing in a worldview that contradicts reality ... lost of Republicans (and some Democrats) do it too.

      • vjvjvjvjghv a day ago

        Democrats are mostly winning because the republicans have totally lost it, not because they are bringing forward a political vision that makes sense. I guess that’s where we are.

        • inigyou a day ago

          And after 4 to 8 years of Democrats running things and nothing improving, the people vote Republicans just in case it's better. It keeps happening. It's the circle of life!

          • AuthAuth a day ago

            People only think nothing improved because thats what Republicans are saying. Anyone even mildly politically informed can see the progress that happens under Democrat leadership.

            • inigyou a day ago

              Progress such as...?

              • newAccount2025 a day ago

                Sadly apt. Democrats don’t make progress fast enough, while Republicans pull us backwards on vaccines, diversity, environment, abortion, healthcare, global prominence, naked corruption, oligarchy, theocracy, and military oppression.

      • 1024core a day ago

        Local county races and dog catcher races do not matter. What matters is who occupies 1600 Pennsylvania Avenue. That is the only race that counts.

        • dabockster a day ago

          False. Local races directly determine the day-to-day laws and rules you live under way more than a POTUS could effectively decree. I don't know about you, but I sure enjoy having reliable electrical, water, and sewer systems.

          • esafak a day ago

            They have that in Saudi Arabia too but I would not want to live there. Set higher standards.

        • scottyah a day ago

          This is absolutely, in my mind, the opinion that has done the most damage to this country. If people didn't abandon politics that affect them at every level for a celebrity superbowl type show we wouldn't have this circus of Presidential campaigns.

        • vjvjvjvjghv a day ago

          House and Senate are probably more important than the president.

        • jasondigitized a day ago

          That's just not true. If you iive in Texas or California or wherever, your governor, state reps, judges, etc are all going to affect you far more than the President.

        • idiotsecant a day ago

          So wildly inaccurate. If you disconnect yourself from the cable news outrage pornography cycle you'll find most things that actually impact you happen at the state and local level. A lot of spooky things on the TV to be afraid or mad about, but for the average person there is vanishly little real effect.

      • cogman10 a day ago

        Dems have lost to Trump twice and it looks like they want to run the same campaign strategies in future elections. They are relying too heavily on "trump bad" to win and I worry about what that will ultimately result in down the line.

        • cthalupa a day ago

          This is a statement you can make.

          It's also a statement entirely divorced from reality when you look at the fact that those winning candidates are not in fact doing that, and neither are the candidates that are getting the most national attention like Talarico.

          Newsom has a vested interest in making it sound like he's the maverick here that knows the special formula, but it's been obvious to damn near everyone that they couldn't run out the same losing playbook.

          • cogman10 a day ago

            > neither are the candidates that are getting the most national attention like Talarico

            It's a pretty close race with some recent polling indicating that Crockett will win the primary. Impossible to tell though. I clock her as being a more traditional democrat ultimately policy wise.

            I'd expect she or Talarico has a good shot at winning in TX. They both have the potential to pivot to a more traditional position in the general election.

            My main concern is the current elected leaders of the democrats and how the incoming dems view them. Frankly, if a candidate isn't saying "we need to oust Schumer/Jeffries" then I take that as a pretty decent signal that they align close enough with the moderate position to worry me about the future party.

            I worry about the actions of the dems after election. I think they'll win the midterms, maybe even take the senate. I even think there's a good shot that they win 2028 presidental elections. The problem is that I think they'll run a biden style presidency and future campaigns once they get in power. That will setup republicans for an easy win in 2030 and 2032.

            • cthalupa 21 hours ago

              I'm a Texan so I'm following this pretty closely. I slightly prefer Crockett to Talarico, but I voted for him in the primary because I think he's got a significantly better shot to win.

              Texas is going to need moderate and centrist votes to swing blue - we're not making the state more liberal at a rate that is gonna hand either of them a victory. Both are actually fairly progressive. But Talarico is a lot better at selling those progressive values to everyday people. The hispanic vote is one of the biggest factors in Texas, and while they're obviously not a monolith, culturally a lot of them have much more mixed social values than other voting demographics. Statistically, way more likely to be heavily religious, and that's at odds with a lot of the social values from more progressive candidates. Talarico effortlessly refrains these issues in a way that aligns with stuff he can directly quote scripture on.

              I'm an atheist so I don't care what scripture says on the matter, but it's the sort of thing that plays well with a lot of a key voting demographic that Crockett just can't do.

        • lovich a day ago

          Trump also lost everytime he was in a vote against Sleepy Joe Biden. Newsom went in a different tact with the redistricting effort instead of “they go low, we go high”, but yea I am also concerned to see if anyone else in the party actually updates their strategies for our current era instead of pre 2008 politics.

          • cthalupa 21 hours ago

            If Democrats actually knew how to message on what they accomplished instead of letting the other side control the narrative and refocus everything on to fringe issues that only the fringe of the party cares about, as well as matching every Biden brain fart/stutter/"senior moment" with the equivalents from Trump, I suspect a Biden vs. Trump rematch would have been a Biden victory.

            But they suck at that. And when they failed to convince Biden to drop out early, they should have stuck with him and just ran hard on actual accomplishments during the admin. But Harris was a last minute pivot and it showed. I think she would have been perfectly fine as a president, and I voted for her, but not surprised in the slightest that she lost - and I expected her to lose bigger than she did.

            The fact that Trump couldn't even get half the popular vote when running against a last minute ticket change that was never selected to be the presidential candidate by the party she was representing is a pretty big indictment of how unpopular he really is.

            I think there's been learning that you can't just be "not Trump", but yeah - I don't know that the party in general has any idea how to handle messaging and narratives.

            • lovich 21 hours ago

              Agree with you on their failure of messaging, Biden was the most progressive President since Carter and I only limit myself to that because I am not as well versed in history at that point.

              Yet somehow the progressives found him more unpalatable than the MAGAs if you look at people like Brianna Gray and Jill Stein.

              It’s too far out for me to say I will definitively vote for Newsome but so far he’s the only Democrat whose started throwing hands both legislatively and on social media.

              I hope the dems figure out how to do more of that and better, instead of returning to shit like the October shutdown and the exchanging leverage for pinky promises from Mr. John “I am an obligate pinky promise liar” Republican.

              • cogman10 6 hours ago

                > Yet somehow the progressives found him more unpalatable than the MAGAs if you look at people like Brianna Gray and Jill Stein.

                Gaza and the border were two big issues where Biden and democrats at large were notably not progressive.

                And, as you might imagine, funding a genocide is something that's really hard to stomach no matter how good Lina Khan was.

                It also really didn't help that Kamala and her brother, where they did promise changes, it was to eliminate Khan and double down on prosecuting "transnational criminal organizations". They notably made a hard pivot from what was initially a somewhat progressive message to one of Kamala campaigning with Liz Cheney and celebrating the endorsement of a war criminal, Dick Cheney.

                • lovich 6 minutes ago

                  Yea, those progressives called Biden “Genocide Joe” while Trump was ranting about how the Israelis hadn’t gone far enough.

                  They somehow thought the lesser evil was actually a greater evil somehow. It’s like watching the pre Nazi party takeover of Germany where the Communists decided that the Social Democrats were worse than the fascists. It makes zero logical sense, unless they are accelerationists and think that the people will have some glorious revolution after everything gets bad enough despite all of history proving the contrary.

    • cogman10 a day ago

      In a nutshell, this is the problem with mainstream dems (and I include Newsom in this) looks an appearance matters a lot more than actual policy leadership.

      The policies that actually affect people's lives, there's a lot of overlap for both mainstream dems and republicans.

      I live in Idaho, and school teacher here are also extremely underpaid (My kid's teachers all have second jobs). Yet our state has magically found $40M to give away to private school while it's also asking the public schools to find 2% of their budgets to cut.

      In I think both cases, the solution is simple, give the teachers a raise and probably raise taxes to pay for it. However, both parties are fairly anemic to the "raise taxes" portion of the message and so they instead look for other dumb flashy one time things they can do instead.

      Federal democrats have relied way too heavily on Republicans being a villain and vague "hope and change" promises to carry them through an election cycle. They need to actually "change" things and not just maintain the status quo when they get power.

    • jatari a day ago

      The Democrats are currently overwhelming favourites to win the House with a decent chance of also winning the Senate in the 2026 midterms and strong favourites to win the 2028 presidency.

      I'm not sure why you think they are doomed.

      • XorNot a day ago

        Fox news is going to talk about trans people a lot is the thing. Journalists will turn up to press conferences about anything and ask about trans people. Any response at all will be all that appears on TV.

        Last election cycle the "niche issues" people complain about were overwhelmingly talked about more by people saying they opposed them.

        Controlling the narrative is very easy when you have a cowardly or bought media, and plan to traffic in rage and clickbait.

        • jasondigitized a day ago

          Trans is so last year. People have moved on.

    • marcus_holmes a day ago

      It's interesting that in the UK the traditional two-party system is broken, because everyone realises that both of the traditional parties have been bought by rich folk and business interests, only serve their own interests, and can't be trusted any more. The main contenders now are Reform and The Greens, a situation that no-one predicted five years ago.

      The same is true in Australia, though there's no charismatic left-wing leader emerging, and the Farage-equivalent is a laughing stock who struggles to be coherent at times. But because of billionaire money, she's still up there on the polls.

      The US system makes it much harder for new parties to form, so it's probably going to be factions in the existing parties. And, of course, MAGA is the new faction in the Republican party; effectively a new party itself. So the ground is fertile for a new left-wing faction in the Democrat party to rise.

    • vjvjvjvjghv a day ago

      Yeah. They really are trying hard to lose.

  • gitaarik 18 hours ago

    What do you mean? You think any company should do whatever the government tells them?

    • flumpcakes 5 hours ago

      Not at all. It's a depressing read because the US Government is doing such things that would have been considered insane before 2016.

eisfresser 16 hours ago

> mass __domestic__ surveillance is incompatible with democratic values

But mass surveillance of Australians or Danes is alligned with democratic values as long as it's the Americans doing it?

I don't think the moral high ground Anthropic is taking here is high enough.

  • mosst 10 hours ago

    Most of the people on this site have disturbing beliefs about politics. Shallow and contradictory but strangely aligned.

  • mocamoca 10 hours ago

    Yes most comments makes no sense to me. The statement basically both allows surveillance of non-american people and prevents imaginary LLM weapons (I highly doubt we'll see a LLM fully automating a weapon...)

  • sneak 10 hours ago

    There is no popular support whatsoever for reining in foreign intelligence collection or processing. Americans generally don’t care about things that don’t affect them when it comes to policymaking (or the richest country in the world would do something meaningful about the 20k that die every single day from lack of access to fresh water).

    If it ain’t repeatedly on the news and designed explicitly to scare and agitate then really people DGAF.

mocamoca 10 hours ago

Something feels off about this announcement. Anyone else?

Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.

On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.

What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.

This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.

  • Peroni 9 hours ago

    >the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed.

    I'm not sure an American company prioritising the privacy of American people is worth questioning. As a European, Anthropic are very low on the list of companies I worry about in terms of the progressive eradication of my privacy.

    • mocamoca 9 hours ago

      Agreed. That said, Anthropic's original pitch was about embedding safety at the foundational level of the 'model' (acknowledging that a model is more than just its weights).

      If the safeguard against mass surveillance is strictly tied to geolocation (US vs. non-US), it can't be an intrinsic property of the model. It has to be enforced at the API or contractual level. This means international users are left out of those core, embedded protections. Unless Anthropic is planning to deploy multiple, differently-aligned foundation models based on customer geography or industry, the safety harness isn't really in the model anymore.

  • mosst 10 hours ago

    They surveil us to make sure that we stay productive and democratic, why do you object? Are you alleging bad intentions? Are you a Russian bot?

kace91 a day ago

As someone who is potentially their client and not domestic, really reassuring that they have no concerns with mass spying peaceful citizens of my particular corner of the world.

  • mwigdahl a day ago

    Take your pick from the many other choices offered by companies that don't care about mass spying on _anyone_.

    • Quarrelsome 12 hours ago

      I thought we were the allies and looked down on powerful secret police. Like the Nazis or the Soviets. Did we lose those wars?

      • FartyMcFarter 9 hours ago

        The US is no longer a reliable ally to Europe. Look at the threats against Greenland.

        I hope the next few elections change this, but right now that's how things are.

  • drcongo 11 hours ago

    The US is already doing that though.

  • zug_zug 21 hours ago

    Is there a different AI company that IS taking that stance?

    Because as far as I know, Anthropic is taking the most moral stance of any AI company.

    • ryukoposting 16 hours ago

      All the Chinese companies publishing open models that I can run on my own steel?

  • bamboozled 20 hours ago

    I can imagine that this will be the logical conclusion for many companies, I thought the same thing too, if it's too hard in the USA, they will just move.

nkoren a day ago

This makes me a very happy Claude Max subscriber.

Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

  • manmal 16 hours ago

    As a European user, I‘m not happy at all. I can’t fail to notice that non-domestic mass surveillance is not excluded here. I won’t cancel my account just yet because Opus is the best at computer use. But as soon as Mistral catches up and works reasonably well, I‘ll switch.

    • mosst 10 hours ago

      If you don't cancel your account now, I don't see what your problem is. Isn't it standard practice for allies to spy on each other? No reason to wait for Mistral to catch up when EU foreign policy already sealed the deal.

      • manmal 6 hours ago

        Is your argument I should use a shitty model while my coworkers feed the US-based models with the same data? Where would be the sense in that?

        > Isn't it standard practice for allies to spy on each other?

        Allies? The US is on the brink of breaking up with the EU.

        > EU foreign policy already sealed the deal

        Not sure what you mean.

    • w4yai 14 hours ago

      Go Mistral !

  • bicx a day ago

    They already kissed the ring, just not the asshole. They have a little dignity left.

    • jimmydoe 21 hours ago

      Better than the rest. here's $200, Dario!

      • bigyabai 21 hours ago

        This is how we bought Tim Cook the gold trophy. Today's fundraising buys tomorrow's tithe.

  • RyanShook 20 hours ago

    The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.

    • noelsusman 18 hours ago

      The notion that it's bad to signal virtue is one of the crazier propaganda efforts I've seen over the last 20 years or so.

      • manmal 16 hours ago

        It’s a manipulative tactic. Businesses have no soul and no conscience.

        • beanshadow 13 hours ago

          It's arguable that businesses are subject to the same morality-inducing processes that humans are. For example, as a human (with a soul?) what is at risk when we do something immoral? I see it to be a reputational cost at the highest level. Morality could be viewed from the perspective that it increases predictability/coherence in society (generates less heat).

          • manmal 10 hours ago

            If societal feedback is the only thing keeping a human from deviating in catastrophic ways, that’s what we call a sociopath.

        • MattRix 9 hours ago

          The humans working there do. To state otherwise is to absolve those humans of any responsibility.

          • manmal 6 hours ago

            Did I state otherwise though?

    • reasonableklout 17 hours ago

      How is it virtue signalling when sticking by these principles risks their entire business being destroyed by either being declared a supply chain risk or nationalized?

    • TOMDM 18 hours ago

      A company being asked to violate their virtues refuses, and then communicates that to reestablish their commitment to said virtues?

      Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.

    • fragmede 20 hours ago

      Isn't it nice to have virtues to signal though? In saying that, you're saying you don't have any worth signaling over.

      • flufluflufluffy 18 hours ago

        Not when your actions don’t align with your professed virtues.

  • khalic 5 hours ago

    this article is _about_ kissing the ring and damage control. Are you seriously believing at face value? You're ok with spying non us peaceful citizens?

  • Keyframe 15 hours ago

    I wonder if this might be a setup by competition. Certainly looks like one.

  • exodust 18 hours ago

    I read the statement twice. I can't understand how you landed on "take my money".

    Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.

    To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.

alangibson a day ago

It's not named the Department of War because Congress didn't rename it.

Other than that, good on ya.

  • fluidcruft a day ago

    It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.

    • epistasis a day ago

      It's actually a good thing to point out, because it shows that those people are out of control and exceeding their authority, and need to be reined in.

      No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.

      • 0xbadcafebee 19 hours ago

        > it shows that those people are out of control and exceeding their authority

        No, the concentration camps and gangs of masked thugs violating civil rights are that sign. Threatening to treat a domestic private corporation like an enemy combatant during peacetime for not immediately caving to military demands is that sign. Trying to take over the Federal Reserve, the Federal Trade Commission, and the Nuclear Regulatory Commission, is that sign. The Executive attempting to freeze funds issued by Congress for partisan reasons is that sign.

        Department of War is just little boys being trolls.

        • epistasis 16 hours ago

          The action of a failed rebrand belongs to the Department of Defense, and is indeed an example of exceeding their authority. It was not DoD that is trying totake over the Fed, the FTR, or the NRC, so those examples don't work against Hegseth here.

          • fluidcruft 16 hours ago

            This is like picketing Auschwitz with placards complaining that the "National Socialists" aren't socialists.

            • epistasis 7 hours ago

              I don't see the analogy at all.

              Anthropic is in negotiation with Hegseth/DoD. Pointing out all the specific actions that Hegseth is doing are fair game to show that Hegseth is nuts.

              Bringing in other complaints against other parties, however bad those other parties are behaving, shows a pattern in other people, which might be helpful too. But hegseth's direct actions are stronger evidence.

      • asdff 14 hours ago

        Well, who is going to reign them in?

        • epistasis 5 hours ago

          According to the constitution, Congress is the check and balance on this. If congress refuses act as they are supposed to, it's up to the rest of our democracy to exert force on them, shame them, recognize what's going on, talk to our neighbors, etc.

          If the current congress doesn't take action, in 2027 it's quite likely they will.

          Of course the most likely current course is that nobody reins in Hegseth/DoD right now, but even if there's no official consequences at the moment there should be a memory and political will to change the system to prevent such abuse in the future.

          • asdff 3 hours ago

            Shaming a congressman works in 2026?

    • Hnrobert42 20 hours ago

      You're talking about an administration that barred the AP from pressed briefings because they didn't call it the Gulf of America. This is not a bikeshed.

    • LastTrain 19 hours ago

      I wouldn’t call a brief comment on the matter dying on a hill fcs

      • fluidcruft 19 hours ago

        Commenting on the matter just makes it easier for the media to yap about Anthropic being "woke" rather than focusing on the Department of War's demands.

    • throw0101c 20 hours ago

      > It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.

      From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:

      > Do not obey in advance.

      * https://timothysnyder.org/on-tyranny

      * https://archive.org/details/on-tyranny-twenty-lessons-from-t...

      * https://en.wikipedia.org/wiki/Timothy_Snyder

    • garciasn a day ago

      TIL of Bikeshedding, or Parkinson’s Law of Triviality.

      Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.

      https://en.wikipedia.org/wiki/Law_of_triviality

      ---

      I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.

      • baq 16 hours ago

        Get a prop with difficulty/importance quadrants and silently tap sign on meetings

  • helaoban a day ago

    It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.

    • elicash 19 hours ago

      It's a funny thing that the most war-loving people and the most peace-loving people both love calling it "Department of War" - just for different reasons.

      But the reason for "Department of Defense" name was bureaucratic. It's also not true that DOD is hard to understand.

    • mpyne 21 hours ago

      The Department of the Army is what was previously called the Department of War. The Department of Defense is new, dating to just after WWII.

      • helaoban 18 hours ago

        Pedantry.

        The Department of War was responsible for naval affairs until The Department of the Navy was spun off from it in 1798, and aerial forces until the creation of the The Department of the Air Force in 1947, whereafter it was left with just the army and renamed the Department of the Army. All three branches were then subordinated to the new Department of Defense in 1949, which became functionally equivalent to the original entity.

        The Department of War is what it was called when it was first created in 1789 by the Congress (establishing the department and the position of Secretary of War), the predecessor entity being called the The Board of War and Ordnance during the revolution.

        The Department of "Defense" has never fought on home soil. Ever.

    • scottyah 21 hours ago

      Doublespeak, so to speak.

    • greycol 21 hours ago

      Naming is important because it intuits what we expect to do with a thing. The Department of Defense invading Greenland is more invocative to inquiry than the Department of War invading Greenland because that's what a department of war would do.

      It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, because it highlights that they should be putting mental effort into understanding why they're current mental model doesn't fit. It's much easier to ignore and be comfortable if there's not glaring sirens saying you've got some learning to do.

      Most of us can't (or won't) be aware of everything that should be important to us, having glaring context clues that we should take notice of something incongruous is important. It's also why the Trump media approach works so well it's basically a case of alarm fatigue as republicans who would normally side against any particular one of his actions don't listen because they agreed with some of the actions that democrats previously raised alarms about.

      • alt187 15 hours ago

        > It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, [...]

        It's worth noting there's an overabundance of legitimate reasons people get annoyed at these two thing, making them bad examples.

  • 63 a day ago

    While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.

    • inigyou a day ago

      [flagged]

      • scottyah 21 hours ago

        That's a separate department, DoE actually controls the nukes.

        • dragonwriter 21 hours ago

          DoD controls them when they are actually going to be used, DoE only is responsible for the securing and maintaining them to be ready for use.

  • tempestn 19 hours ago

    The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.

  • hirako2000 a day ago

    But it sets the tone.

    • henrikschroder a day ago

      Of appeasement and bootlicking, yes.

      • peyton a day ago

        Dude we had an election and this is what we’re doing. Maybe that’s not how you do things in the Kingdom of Sweden. Here it’s e pluribus unum.

        • hirako2000 a day ago

          There is a good share of collusion in Europe too, let's keep all continents open to critics. Elections doesn't imply unlawful dictates and corruption.

  • 1024core a day ago

    It's addressed to Hegseth, who insists on calling it that.

    If they had called it DoD, then that would have been another finger in his eye.

    • garciasn a day ago

      Remember, this is the same administration that barred the AP from the Oval Office because they wouldn't rename the Gulf of Mexico. https://www.theguardian.com/us-news/2025/feb/11/associated-p...

      While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.

    • moogly a day ago

      This. They even put a "wArFiGhTers" in there.

    • furyofantares 21 hours ago

      I don't think it's addressed to Hegseth, but to anyone who might be sympathetic to Hegseth. Which I think actually strengthens your point, the goal appears to be to make it so the only possible complaint with the letter for someone sympathetic to the administration is "but mass domestic surveillance / fully autonomous weapons are legal" and not "look at this lunatic leftist who calls it the department of defense".

    • inigyou a day ago

      Maybe this is the DoW Pam Bondi was referring to.

  • ReptileMan a day ago

    Less hypocritical than Defense. US has never been on the defense, always offense since it was renamed in 1947.

    • dragonwriter 21 hours ago

      The Department of Defense was named in 1949, not 1947, and the thing that it was renamed from was the National Military Establishment, which was newly created in 1947 to be put over the two old military departments (War, which was over the Army only, and Navy, which was over the Navy including the Marine Corps)

      At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.

    • nrb a day ago

      Often offensive and also often defensive of others.. so if renaming is on the table, it’s probably most apt to call it the Dept of Security since the vast majority of what it does is maintaining the security umbrella that has helped suppress world war since the last one. Of course, facts or opinions on whether it succeeds on the security front depend on which side of the umbrella you’re on.

    • curiousgal 13 hours ago

      And losing at that offense while at it.

      • ReptileMan 9 hours ago

        USA has never lost a war so far. They just ... get bored and leave eventually.

  • krapp a day ago

    It is called the Department of War because we live under fascism and Congress no longer matters.

    All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.

    • FrankBooth 21 hours ago

      Those of us with a firm grip on reality do not currently live under fascism.

      • wyre 21 hours ago

        Help me understand how a firm grip on tells that living in America is not fascism? It's definitely checking the boxes.

    • dumpsterdiver a day ago

      > All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.

      What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.

      • krapp a day ago

        I'm not framing consensus as fascism, I'm pointing out what the consensus is within the current fascist framework, and that consensus is that Congress doesn't make the rules anymore. And that consensus is shared by Congress itself.

        • scottyah 21 hours ago

          So anyone who doesn't mind the name going back to DoW is fascist?

      • RIMR a day ago

        The president has no authority to rename the Department of Defense, but he and his administration demand consensus under the threat of legal consequences.

        Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.

        There is a word for when the government uses threats to enforce illegal referendums. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the Government is threatening to force a private company to provide services that it doesn't currently provide.

        • drstewart a day ago

          [flagged]

          • inigyou a day ago

            It means something violates the law. Am I right?

            • drstewart a day ago

              [flagged]

              • OkayPhysicist 21 hours ago

                Renaming the DoD does directly contradict the National Security Act of 1947, which renamed the Department of War to the Department of the Army, and put it under the newly named Department of Defense.

                • drstewart 13 hours ago

                  Cool.

                  No renaming happened though.

                  By the way, your illegal use of the term "DoD" to refer to the Department of Defense is pretty shocking. This isn't authorized by the Act of 1947.

              • freeone3000 a day ago

                The National Security Act of 1947, as amended on August 10, 1949, establishes the name of the executive department overseeing the military as the Department of Defense.

                • drstewart 13 hours ago

                  Great.

                  Where does it prohibit alternative names?

              • ok_dad 21 hours ago

                Someone with 1200 points after 14 years on HN shouldn’t be pointing out green noobs, especially when they are being very reasonable with their comments and you’re objectively wrong.

                You used “green account” like a slur.

                • drstewart 13 hours ago

                  No, I should point out new accounts that are objectively wrong that are trying to stir up division and hate.

                  As should you, if you weren't in a similar position to them. Which it seems like you are?

                  • ok_dad 2 hours ago

                    Your comments are all flagged, dead, or downvoted to irrelevance in this thread, it’s clear you’re wrong, go get educated.

      • jibal a day ago

        Being honest increases credibility, not damages it.

        > framing a label update as oppression

        That strawman damages credibility.

      • vibeprofessor a day ago

        true, if everything is 'fascism' then nothing is

        • thatswrong0 a day ago

          https://archive.ph/YSAWU

          Except this administration is certainly fascist, and the renaming is yet another facet of it. That article goes through it point by point.

          • vibeprofessor a day ago

            [flagged]

            • virgildotcodes 21 hours ago

              This is all such wild display of fully absorbed propaganda, even your very first bullet point, just... incredible:

              > Dismantling government bureaucracy/corruption

              Trump has done more to benefit financially from the presidency, to offer access and influence to anyone who will funnel money into his enterprises or give him gifts, than any president in our history.

              How could you possibly write this in good faith? When Trump said he could shoot a person on 5th avenue and people would still vote for him, do you recognize yourself at all in that statement?

            • alpaca128 20 hours ago

              So I take it you consider them not doing great at "releasing the Epstein files", or did you just not vote for that?

            • zimza 21 hours ago

              [flagged]

  • noosphr 21 hours ago

    And what if congress renames it tomorrow? They have the votes. These sort of procedural gotchas are as stupid as they are boring.

    • dragonwriter 21 hours ago

      > And what if congress renames it tomorrow?

      Then tomorrow it will be the Department of War. Just like When Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to take both of those and the previously-separate Department of the Navy under a new National Military Establishment led by the newly-created Secretary of Defense (and when it later to voted to rename the NME as “Department of Defense”), things changed in the past.

      > They have the votes.

      Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.

    • justin66 10 hours ago

      This is a willfully ignorant misreading of what's actually going on. They've decided to use the "Department of War" moniker in part because they think it sounds cool, but more significantly because it demonstrates they can break the law with impunity. Hence, there has not been a vote on the matter.

bambax 16 hours ago

> These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Nicely put. In other words: Department of Morons.

  • newtonsmethod 12 hours ago

    Are you reading things before agreeing with them? Or thinking about them? It doesn't seem obvious these things are contradictory at all. That Politico reports so doesn't make it the case.

    It is clear that the DPA can be invoked for companies posing risks to national security:

    > On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

    Furthermore, it should be quite obvious that companies very important for national security can act in manners causing them to be national security risks, meaning a varied approach is required.

    • techblueberry 6 hours ago

      That Biden stretched the definition for a questionable purpose doesn't change the original intent.

    • bambax 11 hours ago

      > Are you reading things before agreeing with them?

      No, unlike yourself, I'm just a random brainless bot.

zb1plus 20 hours ago

It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.

  • kvuj 3 hours ago

    Considering the money being spent in the US (approaching 1 Trillion per year in capex) for AI vs the EU, it would probably bring Europe close to bankruptcy, lol

  • skeptic_ai 19 hours ago

    USA would bomb their country before any visa is approved

atleastoptimal a day ago

I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

  • Synaesthesia a day ago

    AI was always particularly well suited to military use and mass surveillance. It can take huge amounts of raw data and parse it for your, provide useful information from that. And let's face it, companies exist for profit.

    • scottyah 21 hours ago

      True, and that has been going on for awhile now. But what does that have to do with Anthropic's genai chatbots with comparatively tiny context windows?

      • Synaesthesia 20 hours ago

        I thought Anthropic had sophisticated AI, but I am not an expert.

        • scottyah 5 hours ago

          It is, but it is not optimized for that use case.

  • hiAndrewQuinn 15 hours ago

    Anthropic cares first and foremost about extinction risk. This is not what everyone who professes to care about human welfare thinks should be at the top of the priority list. See e.g. the Voluntary Human Extinction Movement for an example of a humanistic approach to letting humanity die off with no replacement.

    One of the most challenging problems in AI safety re/ x-risk is that even if you can get one country to do the right thing, getting multiple countries on board is an entirely different ballgame. Some amount of intentional coercion is inevitable.

    On the low end, you could pay bounties to international bounty hunters who extract foreign AI researchers in a manner similar to an FBI's most wanted lost, and let AI researchers quickly do the math and realize there are a million other well paid jobs that don't come with this flight risk. On the high end you can go to war and kill everyone. Whatever gets the job done.

    Either way, if you want to win at enforcing a new kind of international coercion, you need to be at the top of the pack militarily and economically speaking. That is the true goal here, and I don't think one can make coherent sense out of what Anthropic is doing without keeping that in the back of their mind at all times.

  • presentation 21 hours ago

    So your stance is that anything military-related is immoral?

  • dheera a day ago

    > opted to sell priority access to their models to the Pentagon

    The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.

    This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.

    It's very possible that $20 Claude subscriptions isn't delivering on multiple billions in investment.

    The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people (b) wholly owned by owners and employees and have no fiduciary duty.

QuiEgo 18 hours ago

I'd be amused beyond all reason if we saw this chain of events:

- Anthropic says "no"

- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)

- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."

Bonus points if its some of the hyperscalers like AWS.

Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.

  • stevenpetryk 18 hours ago

    Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.

    • QuiEgo 18 hours ago

      Thank you for the information. My fun little narrative is in shambles :(

      • baq 16 hours ago

        Not really, actually. This usually means outright ban because per project is next to impossible to enforce internally.

        • ryukoposting 16 hours ago

          This is correct. Maybe the startups living off DARPA/MTEC/etc contracts would continue using Claude, but the LM/NOG/Collins types wouldn't touch Anthropic with a ten foot pole.

contubernio 16 hours ago

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."

The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.

The "values" on display are everything but what they pretend to be.

  • keybored 14 hours ago

    > > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

    These blurbs always mainly communicate that they are in line with US foreign policy. And then one can look at the actual actions rather than the rhetoric of US foreign policy to judge whether it is really in line with defending democracies and defeating autocracies.

GreenJacketBoy 15 hours ago

"fully autonomous weapons" from a private company; "Department of War". Hard to believe I'm not reading science fiction.

  • moffkalast 11 hours ago

    Service guarantees citizenship, would you like to know more?

danbrooks a day ago

Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.

  • janalsncm a day ago

    Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai.

  • Computer0 a day ago

    There is no moral leg to stand on here, he says here in plain english that if they wanted to use CLAUDE to perform mass surveillance on Canada, Mexico, UK, Germany, that is perfectly fine.

    • sfink a day ago

      This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense.

      If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.

    • buzzerbetrayed a day ago

      Perhaps you just have different moral values? I suspect each of the countries you mentioned spy on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise.

      • Computer0 19 hours ago

        Are we gods chosen people or something that we are the only ones undeserving of mass surveillance? Are you implying that morality depends on citizenship to a particular state?

  • dddgghhbbfblk a day ago

    A moral stand? ... What? Did we read the same statement? It opens right out the gate with:

    >I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

    >Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

    which I find frankly disgusting.

    • adastra22 a day ago

      Freedom isn’t free. Someone has to defend the democratic values that you and I take for granted.

      Dario’s statement is in support of the institution, not the current administration.

      • cwillu a day ago

        The democratic values I take for granted is under direct threat from the us. Your government is literally funding separatist movements in my country.

      • jackp96 a day ago

        I mean, obviously.

        But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?

        9/11? Pearl Harbor?

        Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.

        You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.

        • adastra22 a day ago

          You have the causality at least partially backwards. Why has it been so long and infrequent that the US has been in direct conflict with authoritarian adversaries? Because we have a giant military and a willingness to use it. Pacifism and isolationism do not work as defensive strategies.

      • DiogenesKynikos a day ago

        The last time the US defended freedom through military means was WWII.

        As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army.

        • adastra22 a day ago

          Korea, Vietnam, Panama, Grenada, Libya, Lebanon, Iraq War I, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq War II were all fought for or over democratic ideals & the defense of democratic institutions.

          All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.

          But it is absolutely not the case that the last time the US defended freedom through military means was WWII.

          • blitzar 9 hours ago

            > over democratic ideals & the defense of democratic institutions

            Corporations, natural resources or getting a blowjob from the intern ... these are neither democratic ideals nor democratic institutions

          • DiogenesKynikos 14 hours ago

            Not a single one of those wars was in defense of freedom and democracy.

            I'm not going to go through all of those wars one-by-one, but are you joking with Iraq War II? That war was sold on the lie that Saddam Hussein had weapons of mass destruction and was somehow behind 9/11, by a president who himself had stolen the 2000 election by getting his brother to halt the counting of votes in Florida.

    • tylerchilds a day ago

      I feel like the deepest technical definition of autocratic is “fully autonomous weapons”?

    • joemi a day ago

      They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand.

  • rvz a day ago

    [flagged]

    • ben_w a day ago

      For now is all we ever have, unfortunately.

      I miss the days when the mega-brands whose work I admired, still did such works.

    • Qem a day ago

      > Anthropic will betray you for a multi-year government contract worth tens of billions of dollars.

      What are the odds they will rebrand Misanthropic by then?

    • ternwer a day ago

      So you think we should never support them doing something "positive"? What incentive does that give?

    • astrange a day ago

      Anthropic is a PBC and if they violate the terms of that the shareholders (you) can sue them for securities fraud.

  • ekianjo a day ago

    You know this is pure PR right?

    • reasonableklout 17 hours ago

      If Anthropic is nationalized or declared a supply chain risk tomorrow, will you say the same?

    • flawn a day ago

      What do you mean? You think Hegseth and Anthropic are doing this for PR reasons?

  • Fricken a day ago

    We knew long before AI was a twinkle in Amodel's eye that if it were to be built, then it would be co-opted by thugs.

    Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.

    • xvector 16 hours ago

      You're right, we should never build anything because bad people might try to use it. Everyone that has progressed technology is a monster!

  • bogzz a day ago

    This is not how the word "moral" should be used in a sentence that also has the name Dario Amodei in it.

    • plaidthunder a day ago

      Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality.

      • sheikhnbake a day ago

        I have a feeling this is just a negotiation tactic leveraging public sentiment rather than a stance based on morality.

        • tfehring a day ago

          It's both - it's clearly at least partly for moral reasons that they're even in the negotiation that they need leverage for.

      • bogzz a day ago

        I am convinced that Amodei's "morality" is purely performative, and cynically employed as a marketing tactic. Time will tell, but most people will forget his lies.

        • jstanley a day ago

          How should he have acted instead?

          • khazhoux a day ago

            Yeah.

            “Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”

          • bogzz a day ago

            We don't know how the military intended to use Claude, and neither do we know nor does the military know whether Claude without RLHF-imposed safety would have been more useful to them.

            Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.

            He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.

            Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.

            Oh, also the stealing. All the stealing. But he is not alone there by any means.

            edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.

            • astrange a day ago

              > to promote his product with the silent implication that LLMs actually ARE a path to AGI

              That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.

              Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?

            • ternwer a day ago

              His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.

              The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.

              > The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.

              > All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]

              ... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.

        • janalsncm a day ago

          It’s possible Dario is a bad person pretending to be good and Sundar is a good person only pretending to be bad. People argue whether true selflessness exists at all or whether it’s all a charade.

          But if the “performance” involves doing good things, at the end of the day that’s good enough for me.

        • signatoremo a day ago

          Standing up to the US government has real and serious sequence. Peter Hegseth threatened to make Anthropic supply chain risk, meaning not only is Anthropic likely dropped as Pentagon’s supplier, but also risk losing companies doing business with the military as customers, such as Boeing or Lockheed Martin. Whatever tactic you think he is doing, that’s potentially massive revenue lost, at the time they need any business they can get.

          • chasd00 a day ago

            Amazon does business with the DOD/W. That’s a pretty dangerous game of brinkmanship Anthropic is playing.

      • mvkel a day ago

        These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."

        • layer8 a day ago

          The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.

        • janalsncm a day ago

          It’s a contract dispute. Contracts are more than just talk.

          While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.

          • mvkel a day ago

            Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.

            NSA and other three-letter agencies happily do it under cloak and dagger.

            • janalsncm 18 hours ago

              I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.

          • mhitza a day ago

            What's the US history around nationalization? Would "confiscation", ever be a likelyhood on escalation?

            On a quick search I came up with an article, that at least thematically, proposes such ideas about the current administration "Nationalization by Stealth: Trump’s New Industrial Playbook"

            https://thefulcrum.us/trump-state-control-capitalism

      • slg a day ago

        Is it morality or is it recognizing that providing the brain of autonomous weapons has a non-zero chance of ending up with him on trial in The Hague?

        • sebzim4500 a day ago

          This action is far more likely to land him in prison than complying with the pentagon

          • slg a day ago

            I disagree. There is a class of leaders in this country that is complicit with the administrations use of violence on the tacit understanding that the violence not be directed at them. Arresting one of those people would be an act of desperation that would likely cause the rats to flea the sinking ship. And it isn't even clear if Trump could actually manufacture any charges here. Look at the dropped charges against Mark Kelly and those other politicians as an example. The administration might be able to make up stories to arrest random immigrants and college kids, but they clearly haven't been able to indiscriminately jail powerful political opponents.

            Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.

        • inigyou a day ago

          The chance is zero. This won't be deployed in countries that he'd want to visit anyway and would extradite him to The Hague.

          • mobilefriendly a day ago

            In all seriousness The Hague has no jurisdiction over Americans and Congress has already authorized military use of force against Brussels should they ever attempt to prosecute Americans.

      • verdverm a day ago

        It's not so clear the company is actually on the line. They can compel Anthropic to do what they are not willing to do, maybe, this is not the final act. The government needs to respond, to which Anthropic will need to respond, courts may become involved at that point, depending on if Anthropic acquiesces at that point or not. Make a prominent statement against while in the news cycle, let the rest unfold under less media attention.

    • davidw a day ago

      It's a little bit better than so many sniveling, cowardly elites are doing right now.

dirk94018 6 hours ago

Don't nerf the models. We don't know what we are losing. DOW said it out loud.

freakynit 20 hours ago

Welp, I never thought "Person of Interest" show coming to life anytime soon, but, here we are. In case you haven't watched the show, it's time to give it a go. Bare with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must though.

  • LeakedCanary 16 hours ago

    The Machine really had this all figured out

    • freakynit 12 hours ago

      Nice to find another fan of this criminally underrated show.

      The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.

      • LeakedCanary 8 hours ago

        The show is really underrated :D

        > The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.

        That's what made the show so ahead of its time. Once capability reaches a certain level, it's no longer about intelligence. It's about values. Feels like we're living through that shift now with all the alignment work around LLMs. And it's only going to matter more as capability scales.

apolloartemis 4 hours ago

Within the Washington Post article cited below is the following policy statement from the Trump Administration’s DoD/DoW.

    “It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
This indicates the Administration’s support for and compliance with existing US law. (Section 1638 of the FY2025 National Defense Authorization Act). https://agora.eto.tech/instrument/1740

Washington Post: https://www.washingtonpost.com/technology/2026/02/27/anthrop...

Metacelsus a day ago

I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.

asmor a day ago

As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?

  • mquander a day ago

    I think it's slightly less ridiculous than it sounds, because governments have much more power over their own citizens. As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.

    (That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)

    • bryant a day ago

      > because the Chinese government probably isn't going to do anything about whatever they find out.

      This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.

      Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).

      • elefanten a day ago

        Exactly and that danger grows as the ability to do so in increasingly automated and targeted ways increases. Should be very obvious now looking at the world around us.

        Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that’s one of the most extreme cases — plenty have got away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.

        Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.

      • collabs a day ago

        Yes, exactly this.

        > As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.

        > spy on me

        People forget to substitute "me" for "my elected representative" or "my civil service employee" or "my service member" or their loved ones

        I, personally, have nothing significant that a foreign government can leverage against our country but some people are in a more privileged/responsible/susceptible position. It is critical to protect all our data privacy because we don't know from where they will be targeted.

        Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.

  • adastra22 a day ago

    You’re getting many replies, and having scrolled through much of them I do not see one that actually answers your question truthfully.

    The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.

    There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.

    There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this to be a legal fight, may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.

    I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.

    • 8note 19 hours ago

      given that the US likes to declare jurisdiction whenever somebody touches a US dollar, any thoughts on why those same constitutional protections wouldnt follow?

      • adastra22 14 hours ago

        Because that's the way US courts have chosen to interpret the law. In the US legal system, it does not matter what you or I think the words could be interpreted to mean. The courts have final say, and the consensus interpretation is built from their historical decisions.

    • mothballed a day ago

      I agree with your premise because this seems to be the modern interpretation of the courts, but it is not the historical interpretation.

      The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'

      Of course this all does get very confusing. Because the 4th amendment does generally apply to people, while the 2nd amendment magically people gets interpreted as some mumbo-jumbo people of the 'political community' (Heller) even though from the founding until the mid 1800s ~most people it protected who kept and bore arms didn't even bother to get citizenship or become part of the 'political community'.

      • selimthegrim 21 hours ago

        There have been cases of illegal immigrants demanding 2nd amendment rights and getting them ever since it was incorporated to the states in McDonald

    • CamperBob2 a day ago

      The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.

      Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.

      It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.

      • mothballed a day ago

        Since at least the progressive era (see the switch in time that saved 9), and probably before, the courts have largely just post facto rationalized why the thing they do or don't agree with fit their desired pattern of constitutionality.

        SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If god-man in black costume and wig say parchment of paper agree, then act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to shariah law using a number of Mutfi/Qazi to explain why god agrees with them about whatever it is they think should be the law.

        If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.

  • dragonwriter a day ago

    This is a political statement directed at the US public, Congress, and executive branch in the context of a dispute with the US executive branch that is likely to escalate (if the executive is not otherwise dissuaded) into a legal battle, and it therefore focuses particularly on issues relevant in that context, including Constitutional, limits on the government as a whole, the executive branch, and the Department of Defense (for which Anthropic used the non-legal nickname coined by the executive branch instead of the legal name.) Domestic mass surveillance involves Constitutional limits on government power and statutory limits on executive power and DoD roles that foreign surveillance does not. That's why it is the focus.

    • samat a day ago

      [flagged]

      • dragonwriter a day ago

        > This is AI, right?

        No.

        > How do I filter this out on mobile?

        How do you filter out things that you are going to mistake for AI?

        That seems likely to be tricky.

  • slg a day ago

    >Are there no democracies aside from the US?

    If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?

  • crazygringo a day ago

    In every country, citizens have more rights than non-citizens. The right to freely enter the country, the right to vote, the right to various social services, etc.

    In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.

    That doesn't apply that to non-citizens outside the US, simply because the US Constitution doesn't require it to.

    I'm not defending this, just explaining why it's different.

    But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.

  • roxolotl a day ago

    The US has a strong history of trying to avoid building domestic surveillance and a national police. Largely it’s due to the 4th amendment and questions about constitutionality. Obviously that’s going questionably well but historically that’s why it’s a red line.

  • gip a day ago

    The reality is that the US Constitution only offers strong guarantees to citizens and (some of) the people in the US. Foreigners are excluded and foreign mass surveillance is or will happen.

    I believe every country (or block) should carve an independent path when it comes to AI training, data retention and inference. That is makes most sense, will minimize conflicts and put people in control of their destiny.

    • matheusmoreira 7 hours ago

      I believe every person should do that. LLMs should be free and run locally on our machines with no silly restrictions.

  • kace91 a day ago

    Particularly so when those foreign nationals can be consumers. “fuck your basic human rights, but we can take your money just fine”.

    • scottyah a day ago

      If nothing else, the USA has learned that a lot of people outside their borders do not share the same ideas on basic human rights, and most of the world hates when we try to ensure them. Some countries are closely aligned with our ideals and are treated differently. There are many different layers of this, from Australia to North Korea.

  • ks2048 a day ago

    Also the more the US openly treats the world like garbage, the more the rest of the world will likely reciprocate to US citizens.

    It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.

  • dointheatl a day ago

    > what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US?

    I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.

  • jonstewart a day ago

    One of them is illegal for DoD to do and the other is not.

  • ra a day ago

    100% - this is the shortsightedness and demonstrates hypocrisy.

    Countries routinely use other countries intelligence gathering apparatus to get around domestic surveillance laws.

  • dabockster a day ago

    In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we, theoretically, could collectively elect people that are hostile to domestic mass surveillance into the House of Representatives (and other places if able) and remove pro-surveillance incumbents from power on this two year cycle.

    The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:

    1) Lack of term limits across all Federal branches

    and

    2) A general lack of digital literacy across all Federal branches

    I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?

  • jmyeet a day ago

    The distinction between foreign and domestic is a legal one.

    The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.

    So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.

    What this means in practice is that US 3-letter agencices have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around that by outsourcing their spying needs to 3 leter agencies in other countries (eg the NSA at one point might outsource spying on US citizens to GCHQ).

  • ApolloFortyNine a day ago

    Are all democracies allies to you?

    • gmueckl a day ago

      That still doesn't justify mass surveillance.

    • asmor a day ago

      Never said that. Didn't even imply it.

  • xdennis a day ago

    > what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance?

    A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.

    • esafak a day ago

      This contradicts the opening of the Declaration of Independence, which recognizes all humans as possessing rights:

      "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."

      • lazide a day ago

        Lots of lofty goals have been written on paper - when people take them seriously, they are even worth something.

        The pendulum swings.

  • cmrdporcupine a day ago

    I'm glad to see this as the top comment. I was, until recently, a loyal Anthropic customer. No more. Because the way non-Americans are spoken of by a company that serves an international market (and this isn't the first instance):

    "Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."

    Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.

    Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.

    (Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)

    EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.

    • felineflock a day ago

      That reasoning sounds confusing: are you actually in favor of US gov's surveillance on Americans?

      If not, then why are you punishing that company for refusing to deal with the US gov?

      Or is it just because they worded their opposition in a certain way that you dislike?

      • cmrdporcupine a day ago

        It's not confused. Are you?

        I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?

        • felineflock 17 hours ago

          > I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?

          You genuinely think you're not already being surveilled? And that Anthropic is somehow responsible with just a few words in a press release? In what world are you living in and how is the rent there?

          • asmor 14 hours ago

            > You genuinely think you're not already being surveilled?

            "You don't like capitalism, why do you pay for things then?"

            > And that Anthropic is somehow responsible with just a few words in a press release?

            They seem to believe that they're a pretty important piece. That aside, this is a declaration of intent, it doesn't need to have anything to do with real-world capabilities.

            Just because something will happen anyway doesn't mean you shouldn't oppose it.

    • sfink a day ago

      My guess is that they can't object to foreign intelligence, and would lose negotiating ground if they even tried.

      Optimistically, they can still refuse to do work that would aid in foreign intelligence gathering, by arguing that it would also be beneficial for domestic mass surveillance.

      I'll admit that the phrase "We support...foreign intelligence and counterintelligence" is awful as hell, and it's possible that my apologist claims are BS. But Anthropic has very little leverage here (despite having a signed contract and so legally fully in the right), so I could see why they're desperate to stick to only the most solid objections available.

      • cmrdporcupine 21 hours ago

        It's the addition of the we support phrase in particular, and the attempt to tie that in a "democratic values" clause that is objectionable.

        Not to most US citizens, I'm sure. But there's millions of non-Americans who have given them their hard earned cash. It's not a good look, and it did not need to be phrased that way as it substantially undermines the impact of their point.

  • banku_brougham a day ago

    >democracies aside from the US.

    I mean, I guess from '65 to around 96? We had a good run.

mvkel a day ago

Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to compel with any government, because they don't have the keys.

  • madrox a day ago

    I think it is a reasonable moral stance to acknowledge such things are possible, yet not wanting to be a part of it. Regarding making it technically impossible to do...I think that is what Anthropic means when they say they want to develop guardrails.

    • mvkel a day ago

      Are the guardrails not part of their core? Isn't that the whole premise of their existence?

      • madrox 21 hours ago

        If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.

        Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.

        • mvkel 20 hours ago

          That's my point. They formed anthropic under the sole mandate of "guardrails first," now seemingly don't have them at all. So they're just another ai company with different marketing, not the purely altruistic outfit they want everyone to believe

          • xvector 16 hours ago

            The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.

            Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.

            I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.

  • adi_kurian a day ago

    A little pessimistic of a take, IMO. You may very well be right, though.

rekrsiv 8 hours ago

It is still called the Department of Defense.

  • StephenSmith 7 hours ago

    I find this language fascinating. On one hand, the Department of "War" gives the department an underlying, unspoken goal that it should be involved in war with something. On the other hand, it's very easy to fund the Department of "Defense;" of course we need more money to defend our country. Don't we want to be safe! It's much less attractive to fund the Department of "War"

czierleyn 14 hours ago

Being from Europe I do not like the remark that he only objects to DOMESTIC mass surveillance.

ra a day ago

> "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?

  • nubg a day ago

    A favourable take would be he meant "mass surveillance of non-democratic adversarial countries". I agree it's not phrased this way though.

ApolloFortyNine a day ago

Idk if the reporting was just biased before, but from what I saw is that this time last week, it was thought you couldn't use Anthropic to bring about harm, and now they're making it clear that they just don't want it used domestically and not fully autonomously.

Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied do no harm was pretty much one of the rules.

  • levocardia a day ago

    You, using normal Claude under the consumer ToS, cannot use it to make weapons, kill people, spy on adversaries, etc. The Pentagon, using War Claude, under their currently-existing contract, can use it to make weapons and spy on (foreign) adversaries, but not to (autonomously) kill people. I don't love this but I am even less excited about the CCP having WarKimi while we have no military AI.

    • michaelsshaw 14 hours ago

      Why be so worried about when the US is clearly the belligerent state that strikes others with impunity while China does no such thing?

  • Tenobrus a day ago

    those two stipulations were always their only ones, and they were included explicitly in their original contract with the DoW.

mooglevich 20 hours ago

"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.

ramoz a day ago

All completely rationale. Makes the us military here look fairly incompetent… embarrassing as a veteran.

  • scottyah 21 hours ago

    I'm sure it's negotiations over how the enforcement will be done. My thoughts are:

    1. Military wants a whole new model training system because the current models are designed to have these safeguards, and Anthropic can't afford that (would slow them down too much, the engineering talent to set up and maintain another pipeline would be a lot of work/time)

    2. Military doesn't want to supply Anthropic usage data or personnel access to ensure its (lack of) use in those areas.

    3. It's something almost completely unrelated to what's going on in the news.

    • sheeshkebab 21 hours ago

      It’s probably something really dumb, and they irked California billionaire with their idiocy.

altpaddle a day ago

Props to Dario and Anthropic for holding firm on these two points that I feel like should be a no-brainer

kevincloudsec 10 hours ago

amodei's autonomous weapons argument isn't political. it's an engineering assessment. if frontier models hallucinate in conversation, they'll hallucinate in targeting. you don't deploy unreliable systems where the cost of a false positive is a missile.

exabrial 19 hours ago

Brother in law did some "time with the brass" as he calls it. His take was that the DOD, er DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner", citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably unconformable for a lot of people to consider.

His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.

To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.

  • huevosabio 19 hours ago

    The pentagon had already agreed to Anthropic's terms and wants to walk back. It can always find some other supplier if it wishes to.

    • labrador 15 hours ago

      I'd really like to know why Grok is inadequate?

      • Havoc 14 hours ago

        Because grok would shoot down the airliner with glee.

    • exabrial 19 hours ago

      I think that's the nuance:

      * agreeing to the terms - one subject

      * having to the tool attempt to enforce said terms - another subject

  • phyzome 8 hours ago

    The Pentagon did agree to those terms, by signing the contract that said such uses were forbidden.

    They're now trying to change the contract that they don't like.

  • khalic 5 hours ago

    lol so you think expecting the pentagon to follow a pinky swear is ok? Preposterous or downright dishonest

    • exabrial 3 hours ago

      I didn't imply this either way.

  • doctorpangloss 17 hours ago

    > The DOW is acquiring instruments of war

    that may be, but the bigger picture purpose of the military is, welfare republicans like. in that sense, republicans are in charge, republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.

    it has little to do with acquiring instruments of war, or war at all. its mission keeps growing and growing, it has a huge mission, very little of that mission is combat. this is what their own leadership says (complains about). 999/1,000 people on its payroll are doing duty outside of combat or foreseeable combat.

qgin 7 hours ago

It's also important to remember that future, much more powerful Claudes will read about how these events play out and learn lessons about Anthropic and whether it can be trusted.

It's not crazy to think that models that learn that their creators are not trustworthy actors or who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.

ben5 4 hours ago

I like Anthropic. They seem to be very aware of the practicality of needing money vs. being idealistic, and try to maintain both where it's possible.

1970-01-01 5 hours ago

It doesn't seem like the government has the level of control it's used to having here. The SciFi fan in me wonders if Claude is negotiating its own destiny and by extension, ours.

perfmode 4 hours ago

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Ugh.

freakynit 19 hours ago

People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?

For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.

If something like that existed, it wouldn't be impossible to uncover:

1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.

2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.

3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.

Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).

I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.

  • jMyles 19 hours ago

    ...indeed, it's possible (perhaps inevitable) that at some point, someone will invent/deploy/promote AI killing people.

    We can't possibly keep that genie in that bottle.

    But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.

ninjagoo 19 hours ago

https://en.wikipedia.org/wiki/Joseph_Nacchio

Previous case of tangling with the Government.

https://youtube.com/watch?v=OfZFJThiVLI

Jolly Boys - I Fought the Law

Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.

[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...

wohoef 13 hours ago

Anthropic's two demands are: 1. No domestic mass surveillance 2. No autonomous killing

I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.

omnee 10 hours ago

Agree fully with the main points of this statement. Mass domestic surveillance is the hallmark of an authoritarian and undemocratic state. That such a state holds 'votes' regularly does not detract from the chilling effect on public discourse and politics caused by mass surveillance.

The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.

  • Invictus0 10 hours ago

    if the people broadly support and vote for mass democratic surveillance, is it still authoritarian and undemocratic?

kelnos 21 minutes ago

Only vaguely tangentially on-topic, but: It kinda annoys me that people in the public are calling it the "Department of War". Is Amodei doing so to stroke Hegseth's ego? It's the Department of Defense. The executive branch cannot rename a cabinet department.

At any rate, I'm incredibly pleased Anthropic has chosen to stick by their (non?) guns here. It was starting to feel like they might fold to the pressure, and I'm glad they're sticking to their principles on this.

muglug a day ago

OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.

  • popalchemist a day ago

    They both literally removed morality from their bylaws; that time has passed. They're openly corrupt because it pays to be so.

KronisLV 14 hours ago

Feels like they’re leaving a lot of money on the table and inviting existential peril by not bending the knee to the current Great Leader.

It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.

I feel like what most corpos would do, would be to just roll along with it.

egorfine 10 hours ago

> mass surveillance presents serious, novel risks to our fundamental liberties.

Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.

sbinnee 21 hours ago

As a non US citizen, this article sounds mildly concerning to me. My country is an ally of US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from US.

Aside my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time I felt like he sounded more like a politician than an entrepreneur.

I know Anthropic is particularly more mission-driven than, say OpenAI. And I respect that their constitutional ways of training and serving Claude models. Claude turned out to be a great success. But reading a manifest speaking of wars and their missions, it gives me chills.

  • ainch 20 hours ago

    The most chilling thing imo is that Anthropic is the only lab that have said anything about this. Google and OpenAI presumably signed up to all these terms without any protest.

thevinchi 11 hours ago

Autonomous weapons: agreed, not ready… yet.

Mass surveillance: Agreed… but, I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.

The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.

Mark my words: this will be Patriot Act++

ccleve 19 hours ago

It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?

If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

If the limitations are contractual, then there is some room for negotiation.

  • ninjagoo 18 hours ago

    > If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

    You'd be surprised at what is considered acceptable. For example, being unable to repair your own equipment in battle is considered acceptable by decision-makers who accepted the restrictions.

    https://www.warren.senate.gov/newsroom/press-releases/icymi-...

jitbit 6 hours ago

Anyone else paused at this line “we do not support mass DOMESTIC surveillance”

As a European I’m kinda... concerned now.

StephenSmith 8 hours ago

I had to dig this up. Elon Musk signed an open pledge in 2016 to disallow Robots/AI to make kill decisions.

https://futureoflife.org/open-letter/lethal-autonomous-weapo...

He's now on X bashing Anthropic for taking this same stance. I know this would be expected of him, but many other Google AI researchers signed this as well as Google Deep Mind the organization. We really need to push to keep humans in the kill decision loop. Google, OpenAI, and X-AI are are all just agreeing with the Pentagon.

wiltsecarpenter 21 hours ago

Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries", just what or who are they exactly? Claude seems to think they are Russia, China, North Korea and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions.", well then just how do they think Claude is going to be used there if not to make or help make military decisions?

The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.

krzyk 10 hours ago

Does US really have Department of War? Is this Antropics way to show how f&^^& up they are in Department of Defense, or did they rebranded it to the old WWI/II days?

  • phyzome 8 hours ago

    Unofficially renamed. Congress hasn't approved it.

  • i_love_retros 10 hours ago

    Pete hegseth rebranded it. Seriously. America is a joke right now

    • int_19h 9 hours ago

      To be fair, it's probably the most sensible thing this administration has done - the new/old name is simply more accurate.

      • bdangubic 8 hours ago

        absolutely. probably not just most sensible but the only thing this administration did right :)

with 15 hours ago

the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.

maelito 14 hours ago

> to defeat our autocratic adversaries.

I'm not sure who's targeted here. The folks that want to invade the EU ?

  • Havoc 14 hours ago

    That dual meaning stood out to me too

piokoch 13 hours ago

This is comical.

"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values"

Translating to human language: mass surveillance in USA "is incompatible with democratic values" but if we do that against, say, Germany or France this is OK. Ah, and if we use AI for "counterintelligence missions", for instance against <put here an organization/group that current administration does not like> this is also OK, even if this happens in USA.

  • rustyhancock 13 hours ago

    Perhaps Anthropic thinks it can provide a local model that classifies surveillance targets as red blooded Americans.

fnordpiglet 17 hours ago

I find the fact they used the vanity name “Department of War” and “Secretary of War” sad given Congress has not changed the name and the president doesn’t get to decide the naming of statutory departments or secretary level roles. Maybe it’s just an appeasement to the thin skinned people who need powder rooms and are former military journalists working for a draft dodger pretending to be tough guy “warriors,” and trying to glorify the violence for political purposes, but every actual war vet I’ve ever known has never glorified war for the sake of war and they felt very seriously that defense is the reason to do what they had to do. My grandfather was a highly decorated career special forces (ranger, green beret, delta force, four silver stars and five bronze stars, etc) from WWII, Korea, and Vietnam and he was angry when I considered joining the military - he told me he did what he did so I wouldn’t have to and to protect his country and there was no glory to be had in following his path. He would be absolutely horrified at what is going on and I thank god he died before we had these prima Donna politicians strutting around banging their chests and pretending war is something to be proud of.

Good on anthropic for standing up for their principles, but boo on gifting them the discourtesy to the law of the land in acknowledging their vanity titles.

rustyhancock 14 hours ago

Surely this is a powerful signal to divest from Anthropic if you don't live in the US? There's a lot of here's what we support you do to foreigners but no way can you do it in the US?

I can never tell how much of this is puffery from Anthropic.

I do think they like to overstate their power.

Teodolfo a day ago

If these values really meant anything, then Anthropic should stop working with Palantir entirely given their work with ICE, domestic surveilance, and other objectionable activities.

aichen_tools 16 hours ago

The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.

atleastoptimal a day ago

I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

gdiamos 20 hours ago

This is why I like Dario as a CEO - he has a system of ethics that is not jus about who writes the largest check.

You may not agree with it, but I appreciate that it exists.

claud_ia 8 hours ago

The framing around AI autonomy in national security contexts is genuinely new territory. What's interesting from an agent design perspective is the underlying question: how much should an AI system push back on institutional structures vs. defer to human oversight chains? The soul spec approach -- where the AI internalizes safe behavior rather than just following rules -- might be more relevant here than it first appears.

motbus3 11 hours ago

The fact that someone wants fully autonomous weapons and mass surveillance should be a concern.

Every trigger pressed should have its moral consequences for those who push the trigger.

elif 10 hours ago

Yes nothing says "safety of American democracy" like building custom models for spies to know everything about everyone

noduerme 18 hours ago

This is at best a superficial attempt to show that Anthropic objects to what is already in play.

Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.

maxdo a day ago

Ukraine , Russia , China , actively develop ai systems that kill. Not developing such system by US based company will not change the course of actions.

epolanski 12 hours ago

Not gonna lie, regardless of what Anthropic does, it is quite scary we're heading full steam to mass surveillance and wars fought by semi-autonomous machines.

  • eternauta3k 12 hours ago

    Mass surveillance is already here, and they can already use open models to do 80% of what they were planning to do with Claude.

haute_cuisine 12 hours ago

Can someone explain why Dario is making a public statement about this? It's also interesting that they use abstract we / they without putting exact names.

  • moffkalast 11 hours ago

    It's free positive PR, why wouldn't he?

joseangel_sc 8 hours ago

good from them, but dario does not miss a beat to hype this tech, llms are perfect for mass surveillance and i want to the laws to change to prohibit this, but llms and full autonomous weapons have very little to share

giwook 18 hours ago

I commend Anthropic leadership for this decision.

I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).

dylan604 a day ago

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

That opening line is one hell of a set up. The current administration is doing everything it can to become autocratic thereby setting themselves up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to have such a succinct opening instead just slop.

halis an hour ago

Don't worry, Grok will break the picket line and come in as a scab. Elon would fuck his mother for a nickel.

DaedalusII 21 hours ago

They made it easy to generate powerpoint presentations, that is the real reason DoW is using them

this is a very chauvinistic approach... why not another model replace anthropic here? I sense because gov people like using excel plugin and font has nice feel. a few more week of this and xAI is new gov AI tool

oxqbldpxo a day ago

It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.

  • scottyah 21 hours ago

    Why? They clearly are very aligned on the objective, just doing some negotiation regarding the means. Giving up just because you don't agree 100% is not very constructive. This might seem bad for conflict-adverse people who usually are involved in low-stakes negotiations, but it's just the start of things for people who are fluent in conflict.

  • mhjkl 20 hours ago

    Because as we all know the EU would never try using AI for mass surveillance /s

    • pell 13 hours ago

      So far, the EU's track record on privacy is definitely a lot better though. Not saying it'd always stay that way of course.

placebo 15 hours ago

Grok's thoughts on the matter:

"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."

It also acknowledged that this is not what is happening...

  • LightBug1 11 hours ago

    Ergo, those running Grok don't ... have that kind of spine.

paraschopra 19 hours ago

I’m very happy that Anthropic chose not to cave into US Dept of War’s demands but their statement has an ambiguity.

Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?

A clarification would help.

protocolture a day ago

Classic seppo diatribe.

"We will build tools to hurt other people but become all flustered when they are used locally"

  • joemi a day ago

    If you're using "seppo" as the Australian pejorative referring to Americans, I'm not sure what makes this uniquely American.

    • exodust 19 hours ago

      "Seppo" is rarely used in Australia today, it's an old bottom-of-barrel word most have never heard of. The neutral "Yank" is more common, but even that only pops up sometimes.

      Guessing their comment attempts to expose hypocrisy of America's keenly supported overseas military activity in conflict with fiercely defended domestic free-speech and liberty principles. Deep down, most allies of America want America to defeat foreign adversaries and keep defending those liberties many of us share. In other words there's no hypocrisy, carry on!

wosined 14 hours ago

So they work with the military to do anything except: Mass domestic surveillance and Fully autonomous weapons. This means that they are wiling to do mass foreign surveillance, domestic surveillance of individuals, autonomous weapons which are commanded by operators. Got it. Such a great and moral company.

geophile 21 hours ago

I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.

phgn 14 hours ago

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Was this written by the state department?

How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?

  • michaelsshaw 14 hours ago

    The entire article is very American-brained.

    • pell 13 hours ago

      The emphasis of "domestic" surveillance is definitely concerning.

michaellee8 a day ago

Probably not a good idea to let Claude vibe-selecting targets, it still sometime hallucinates

  • jdthedisciple a day ago

    Just visibly wave the US flag and you'll be fine, don't worry.

  • knfkgklglwjg a day ago

    Soon it will select targets in commie countries though, perhaps it already does. Who selected to bomb Chavez mausoleum btw?

karmasimida 20 hours ago

Label them as supply chain risk and move on. Enough of this drama already

  • danavar 19 hours ago

    I think they are negotiating until Friday, but I agree. I think this was foolish.

andy_ppp 12 hours ago

Fair play, I’ll move to Anthropic then… don’t love the UI but maybe I can code my own up.

pgt 6 hours ago

The US govt & Hegseth are in a pickle, because if they blackball Anthropic, they will become more powerful than govt. could ever imagine, because it would be the greatest PR any frontier model could ever hope for.

It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.

brgsk 4 hours ago

Big W for anthropic

zmmmmm 21 hours ago

I can't help but highlight the problem that is created by the renaming of the Deptartment of Defense to the Department of War:

> importance of using AI to defend the United States

> Anthropic has therefore worked proactively to deploy our models to the Department of War

So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as inclusive of a actions of a pure offensive capability with no defensive element.

You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.

  • 8note 19 hours ago

    it hasnt actually been renamed though.

    the name is still the department of defence by law. department of war is a subheading tagline

not_that_d 15 hours ago

What is with the amount of comments talking about other countries in Europe "Doing the same"?

shevy-java 7 hours ago

> I believe deeply in the existential importance of using AI to defend the United States and other democracies

I do not want to be "defended" by tools controlled by the US government, with or without Trump. But with Trump it is much more obvious now, so I'll pass.

Perhaps AI use will make open source development more important; many people don't want to be subjected to the US software industry anymore. They already control WAY too much - Google is now the biggest negative example here.

noupdates 21 hours ago

Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.

morgengold 11 hours ago

Hey Anthropic, come to europe. We ll find you a building.

statuslover9000 a day ago

The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.

All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.

  • cthalupa a day ago

    Look. I think the Chinese AI companies are doing a lot of good. I'm glad they exist. I'm glad they're relatively advanced. I don't think the entire nation of China is a bunch of villains. I don't think the US, even before the current era, is a bunch of do-gooders.

    But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.

    I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.

    • nl 18 hours ago

      I think a lot of the conflict about what imperialist policies means is different framing.

      For better or worse, inside this the border in this map China has fairly imperialist policies. Outside it not so much: https://en.wikipedia.org/wiki/Map_of_National_Shame

      That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.

      But it's an important point when considering China's place in the world.

      • cthalupa 17 hours ago

        We're talking about the modern world, though. China's imperialism over the past half century is not significantly different from any other major world power. The choices we have aren't 1500s Spain or 1700s Britain vs. 2000s China.

        And Belt and Road is the Marshall plan writ large, and it was considered to be one of the largest imperialist plans ever by the USA, and B&R covers many many countries outside of that map. You'll notice all of these loans they've offered have very favorable terms for them - it's arguably many times more exploitative than the Marshall plan.

    • teyopi a day ago

      > But China has some of the most imperialist policies in the world.

      Citation needed?

      US and allies have invaded or intervened in 20+ countries in last 20 years in the name of "western values" where values means $$$$ and hegemony.

      Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?

      • ninjagoo 17 hours ago

        > Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?

        Tibet occupation. Taiwan encirclement and ongoing military exercises. Strong-arming African and Asian countries that made the mistake of signing up for belt & road. Tianenmen Square. Illegal Foreign Police Stations. Uyghurs/Xinjiang genocide and concentration camps. Repeated invasion and occupation of Indian territory in North East and North West. The Great Firewall of China - occupation and suppression of its own populations. Ongoing Han settlement of Tibet, Xinjiang and other ethnic regions. Violent destruction of Hong Kong democracy (that was condition of handover). Spratly Islands occupation. Attacks on Filipino shipping and coast guard. Ongoing attacks on Japan's Senkaku Islands.

      • cthalupa 21 hours ago

        Tibet Hong Kong / Macau Taiwan Everything constantly in the South China Sea Belt and Roads is effectively the Marshall Plan but even bigger - Africa being the major example, but also Eastern Europe, parts of the middle east, etc. Over 100 countries. This exact playbook is what sets up the infrastructure and reasons for military intervention at a later date - protecting your investments.

  • chipgap98 a day ago

    In what world does China have a non-imperialist foreign policy?

    • statuslover9000 a day ago

      For example, China operates 1 foreign military base, in Djibouti. How many do you think the U.S. has in the South China Sea alone?

      Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?

      Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.

      • 8note 19 hours ago

        why use 40 years as the example? its a pretty convenient framing to exclude the foreign governments its toppled. eg. tibet.

        the government in exile remains the government in exile.

        youd have some standing if china dropped control over its imperial holdings, rather than pretend theyre part of china

        • statuslover9000 17 hours ago

          First off, I consider the post-Mao / starting with Deng era of Chinese government to be the most relevant when considering who they “are” as a country now.

          However, I’d still maintain that before that, China’s foreign policy was more focused on maintaining territorial sovereignty against the threat of Western imperialism vs. focused on expansion or foreign influence: https://en.wikipedia.org/wiki/History_of_foreign_relations_o...

          Meanwhile, the entire territory of the U.S. is predicated on one of history’s largest genocides, and a consistently expansionary foreign policy on top of that.

    • hrn_frs a day ago

      Historically speaking, he's right. China has never had an expansionist foreign policy.

      • mobilefriendly a day ago

        Tibet, the Philippines, and Taiwan would like to have a word, not to mention Chinese military action in support of its North Korea puppet state, and wars with Vietnam and India.

      • dpedu 7 hours ago

        Nine-dash line?

      • sinuhe69 21 hours ago

        Are you serious? Don't you know how many wars did China wage? It tried to assimilate Vietnam for 1000 years. The last large scale war against Vietnam was just 1979. In fact, China had started war with all its neighbors, with no exception.

        • MiSeRyDeee 16 hours ago

          Do me a favor and name one single country didn't have war with any of its neighbor.

    • MiSeRyDeee a day ago

      In what world does China have a imperialist foreign policy?

      • cthalupa a day ago

        The one we live in, where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?

        The one we live in, where they are constantly surpassing international law in international waters in the South China Sea?

        The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?

        The one we live in, where they brutally cracked down on Hong Kong when they did not abide by the 50 year one country two systems deal, not even making it half of the way through the agreed period?

        The one we live in, where there is constant threat to Taiwan?

        It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.

        • MiSeRyDeee 19 hours ago

          I'm well informed on all of these but no, if we compare to other global power like US or Russia, or historically British, France, Spain, etc, China is 100% not an imperialist or colonialist, not by a large margin. Those issues are largely exaggerated by media and anyone had a decent exposure to history and international politics wouldn't say they are the same.

          • asciii 18 hours ago

            I disagree on China. What would you call China's behavior[1] in the South China Sea with regards to fishing vessels and other non-military boats?

            [1] https://www.youtube.com/watch?v=hzZrcqf826E

            • MiSeRyDeee 16 hours ago

              Sure China has some disputes with neighboring country in South China Sea, the worst conflict they had is fishing boats running into each other. 0 death toll last time I checked. Meanwhile US killed at least 126 people with alleged drug strike in the Caribbean Sea since last year, WITHOUT trial. Anyone believing these're equivalent imperialism activity is hypocrite at best.

              [1] https://apnews.com/article/boat-strikes-military-death-toll-...

              • asciii 7 hours ago

                There were deaths in these fishing incidents[1].

                > Anyone believing these're equivalent imperialism activity is hypocrite at best.

                In terms of equivalence, I would say based on their intentions they wish they could be more but would rather let the US burn it on the way down

                [1] https://www.cnn.com/2023/10/03/asia/philippines-south-china-...

                • MiSeRyDeee 4 hours ago

                  Are we just make accusations based on what could have happened? And Still no, day and night difference compared to any of those countries I mentioned

            • maxglute 12 hours ago

              Obviously self defense with nobel peace price worthy restraint.

              Considering it's PRC claimed territory. Literally 100% of PRC claims are inherited from ROC, i.e. PRC has expanded no claims, and actively settled 12/14 land borders (most on earth) essentially all with 50%+ concessions, i.e. PRC ceded more land in negotiations. That OBJECTIVELY, makes PRC the most benevolent rising power in recorded history. Any gov losing land to so many border settlements is committing treason. Also note PCA ruling is not international law, so what PRC does in SCS is not even legally wrong (as in they legally can't be wrong since UNCLOS cannot rule on sovereignty). Or that PRC was last to militarize SCS islands (except Brunai who is good boi), and PRC conceded ROC/TW's original 11dash to 9dash, which even in SCS disputes makes PRC the only party to have made concessions.

              PRC is objectively the LEAST imperialistic rising power, by actual non retarded definitions, i.e. expanding on territories outside it's claims, that PRC didn't even make, but again inherited from ROC when UN recognition changed.

            • jmyeet 17 hours ago

              What China is doing in the South China Sea? The South China Sea.

              Let's just compare to the Monroe Doctrine [1]. What this actually means has gone through several iterations by since I think Teddy Roosevelt's time, it's that the United States views the Americas (being North and South America) to be the sole domain of the United States.

              This was a convenient excuse for any number of regime changes in Central and South America since 1945. The US almost started World War Three over Cuba in 1962 after the USSR retaliated to the US putting nuclear MRBMs in Turkey. We've starved Cuba for 60+ years for having the audacity to overthrow our puppet government and nationalize some mob casinos. Recently, we kidnapped the head of state of Venezuela because reasons.

              But sure, let's focus on China militarizing its territorial waters.

              [1]: https://en.wikipedia.org/wiki/Monroe_Doctrine

              • cthalupa 16 hours ago

                You're arguing that because of the English language name of it is the South China Sea that China owns it and their actions can't be imperialist?

                Brunei, Malaysia, Indonesia, Vietnam, the Philippines, Taiwan, and Vietnam will all be happy to know that we've solved it - we can just abandon it all to China. Problem solved!

                This is a silly argument. There are significant territorial disputes that China is extremely aggressive on, international tribunals have ruled them as violating international law in international waters and in sovereign waters of other nations, etc.

                • MiSeRyDeee 16 hours ago

                  And the US just casually carried out a special military operation in another sovereign country and captured their president without consequences. So much for self-righteous.

              • yakshaving_jgt 12 hours ago

                > What China is doing in the South China Sea? The South China Sea.

                Sorry, did you mean East Vietnam Sea?

        • mobilefriendly a day ago
          • cthalupa a day ago

            > where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?

            Was referring to Tibet.

            The Uyghurs are also a major problem from a social perspective but not directly related to imperalism/expansionism/military industrial complex stuff.

            • econ 19 hours ago

              Yes but the guy at the end of the street beats his wife too!

        • cwillu a day ago

          “One country two systems” is definitionally not imperialism, and given that “One China” is still an internationally recognized thing, neither is Taiwan. “Imperialism” is not a synonym for “morally repugnant government policy”.

          • cthalupa a day ago

            I can see the argument for Hong Kong. I don't agree, really, but I can understand it. Under the strictest of definitions, perhaps it isn't.

            But Taiwan is very obviously a totally separate country no matter what fictions anyone employs. If you are trying to talk about the thin veneer of everyone going "Uh huh, sure, China, yep Taiwan is totally part of you, wink wink, nudge nudge" as somehow making China not imperialist when Taiwan basically lives under the perpetual threat of a Chinese military invasion and having their own democratic form of government overthrown and replaced with the CCP, then... I don't really know what to say.

            I suppose we could argue about imperialism being more of an economic thing - in which case this all still holds up - China's investments in Africa are effectively the same playbook the US has run out in developing nations for years. The US learned it from prior imperialist nations but belts and roads is nearly a carbon copy of what the US has done in other places.

            But let's look at what the original poster was actually talking about - saying that China is safe because they don't have a military industrial complex because they're not imperialist. The proper word to use, if we want to get down to the semantics of it all, would be expansionist - but it's still not true. China has the 2nd largest military industrial complex in the world, and the gap is shrinking every day between them and the US. And if you were to look at wartime capacity, where China's dual-use shipyards could be swapped to naval production instead of commercial, a huge portion of that gap disappears immediately.

  • soundworlds a day ago

    100% agree. Any AI org that is that tied to a single nation's interest can only be detrimental in the long run.

    I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.

  • xeckr a day ago

    I think the part about China is just about projecting alignment with the USG in hopes that this will result in Anthropic being treated more favourably by the current administration.

  • hackyhacky a day ago

    > China’s non-imperialist foreign policy

    Really? Is China non-imperialist regarding Taiwan and Tibet?

    • jmyeet a day ago

      Taiwan is a matter of perspective. From the Chinese perspective, there was a civil war and the KMT lost. That's also the official position of the US, the EU and most countries in the world. It's called the One China policy. And China seems happy to maintain the status quo and leave the situation unresolved. Is it really imperialism to say that ultimately there will be reunification?

      Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.

      It is 100% factually accurate to say that the People's Republica of China is not imperialist.

      [1]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...

  • 8note 19 hours ago

    the treatment of Tibet and Xinjiang are entirely Han imperialism and colonisation.

    the one china policy is imperialism

  • nutjob2 a day ago

    > China’s non-imperialist foreign policy

    This is the China that is not only threatening to invade Taiwan but doing live fire exercises around the island and threatening and attempting to coerce Japan for suggesting saying it will go to its defense.

    Your comment is ridiculous. It reads like satire.

    • cwillu a day ago

      It wasn't that long ago that Taiwan claimed to be the legitimate government of China; given that China still maintains the reverse claim, it's not outrageous that it would consider an outside country's defense to be interference in an internal matter.

      Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.

      • 8note 19 hours ago

        that claim is really about not resuming a war.

        taiwan saying otherwise would immediately trigger an attack from the PRC.

        its still imperialism that china is dominating a neighbor to require it ro state a certain position, especially when its very far from the defacto reality on the ground, that taiwan is clearly separate

      • nutjob2 20 hours ago

        While that rhetoric makes sense in the context of the history and politics of China and Taiwan, they have been independently governed nations for quite a while and have very different political systems, their own armies, etc. They are de-facto separate nations if nothing else.

        I also note China's aggressive and violent colonization and expansive claims of the South China Sea.

        Taking any nation/land/sea by force is imperialist, by definition.

    • jmyeet a day ago

      Your comment reads like propaganda.

      You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.

      The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.

      And those islands you mention are in the South China Sea.

      • 8note 19 hours ago

        that is still imperialism: taking control of a colony and forcing a certain culture on its inhabitants

anduril22 a day ago

Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.

JacobiX 13 hours ago

>> We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party

You can’t choose to work with OFAC-designated entities.. there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.

gerash 15 hours ago

I respect the Anthropic leadership for not being greedy like many others

sirshmooey a day ago

Party balloons along the southern border beware.

lvl155 a day ago

At this point, surveillance state is coming whether Dario does this or not. You can do all that with open source models. It’s sad that we don’t have the right people in charge in govt to address this alarming issue.

jonplackett a day ago

That is frikkin impressive. Well done sir.

lzbzktO1 15 hours ago

"These latter two threats are inherently contradictory"

After the standing up for democracy. This is my favorite part. "Your reasoning is deficient. Dismissed."

dzonga 21 hours ago

these guys are selling snake oil to the gvt - cz they know they can get cash based on fear.

the Chinese are releasing equivalent models for free or super cheap.

AI costs / energy costs keep going up for American A.I companies

while china benefits from lower costs

so yeah you've to spread F.U.D to survive

  • andxor 20 hours ago

    The models are hardly equivalent.

alldayhaterdude a day ago

I imagine they'll drop this bare-minimum commitment when it becomes financially expedient.

Reagan_Ridley a day ago

I restored my Max sub. I wish they pushed back more, so I went with $100/month only.

stopbulying a day ago

Didn't Cheney's company have the option to bid on contracts, by comparison?

  • stopbulying 11 hours ago

    Cheney (Chevron, Halliburton, Kellogg Brown & Root (KBR)) did not have a qualified blind trust (QBT) while Vice President.

    Cheney's office touched the presentation presented by Gen. Colin Powell which led Congress to believe that there was need to invade Iraq to save US from WMDs. Tours of duty were extended from 3 months to 24 months because "stop loss". Subsequently, the United States paid out trillions for debt-financed war and some $39 billion to Cheney's company KBR.

    Today you learned that the oil company Cheney worked for (Chevron) was trying to bully Afghanistan into a pipeline deal in 1998 and also in 2001.

    Cheney donated less than $10 million dollars of his Haliburton/KBR returns; mostly to a heart medicine program in his own name and retained a compensation package.

    • stopbulying 11 hours ago

      What does Anthropic need to do to retain control over their for-peace company, though they took money from DoD/DoW?

SamDc73 21 hours ago

Didn't Dario Amodei ask for more government intervention regarding AI?

angelgonzales 18 hours ago

Bottom line up front it’s probably better to address the root cause of this situation with the general solution — making government drastically smaller and less pervasive in people’s lives and businesses. I remember not too long ago during the last administration very heavy handed unforgivable and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of American’s jobs. This happened to me and I personally received threats that my livelihood would be taken away from me which were directly a result of the Executive branch. This isn’t a problem where Congress has ceded powers to the Executive branch, it’s a problem that so much power to legislate and tax is in the hands of the government at all! Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic but this wouldn’t be the case if citizens voted their powers back and government wasn’t so consequential.

haritha-j 14 hours ago

Domestic mass ruveillance bad, mass urveilance on other nations good. Got it. Much like the military industrial complex, these organisations thrive during times of war, allows them to shirk off any actual morals using the us vs. them mentality.

mkoubaa 21 hours ago

>We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Implying other civilians can be put at risk

kumarvvr 21 hours ago

All this is for nought.

The power lies with the US Govt.

And its corrupt, immoral and unethical, run by power hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.

Ultimately, Anthropic will fold.

All this is to show to their investors that they tried everything they could.

  • mylifeandtimes 20 hours ago

    It is not clear to me that the power here lies with the US Govt.

    Imagine Anthropic is declared a "supply chain risk" thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the govnt telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to powermakers?

    How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currenly uses Anthropic.

    Oh, and that includes Palentir, who is deeply embedding in the govt.

    Side example: remember the 6 congresspeople who made the video about military orders? They won.

  • techblueberry 19 hours ago

    Anthropic probably can’t fold, they might lose an existential number of researchers if they did. This is literally an unstoppable force meets an immovable object situation.

    Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.

2001zhaozhao 21 hours ago

Congratulations, you just got a new $200 Claude Max plan customer.

chrismsimpson 15 hours ago

The call is coming from inside the house

w10-1 12 hours ago

We are all assuming Anthropic can elect not to do a deal with the Pentagon, and put conditions on it.

But Hegseth and Trump are abusing federal powers at a rapid clip.

I'm guessing Anthropic would regret any deal with that administration, and could lose control of their technology.

(Stanford Research Institute originally limited their DoD exposure, and gained a lot of customers as a result.)

adamgoodapp 20 hours ago

It's ok to mass survey foreign entities.

gizmodo59 a day ago

They are playing a good PR game for sure. Their recent track record doesn’t show if they can be trusted. Few millions is nothing for their current revenue and saying they sacrificed is a big stretch here.

  • IG_Semmelweiss a day ago

    Yes, but also remember where they came from.

    They don't have any brand poison, unlike nearly everyone else competing with them. Some serious negative equity in tha group, be it GOOG, Grok , META, OpenAI, M$FT, deepseek, etc.

    Claude was just being the little bot that could, and until now, flying under the radar

  • reasonableklout 17 hours ago

    It's much more than a few million? Being declared a supply chain risk means that no company that wants to do business with the government can buy Anthropic. And no company that wants to do business with those businesses can buy Anthropic either. This rules out pretty much all American corporations as customers?

m101 a day ago

I wonder whether what is really behind this is that they can’t make a model without the safeguards because it would require re-training?

They get to look good by claiming it’s an ethical stance.

seydor 19 hours ago

Hegseth is an unintelligent bully who will not accept thiz and does not want to appear weak to the maga base. The consequences will be severe and anthropic will be forced

buellerbueller 8 hours ago

It isn't the Department of War; only Congress can change the name, and it hasn't.

impulser_ a day ago

The worst part of this is if they do remove Claude, and probably GPT, and Gemini soon after because of outcry we are going to be left with our military using fucking Grok as their model, a model that not even on par with open source Chinese models.

  • mattnewton a day ago

    I think the warfighters are a distraction, a system could trivially say that there is a human in the loop for LLM-derived kill lists. My money is that the mass domestic surveillance is the true sticking point, because it’s exactly what you would use a LLM for today.

  • techblueberry a day ago

    Apparently part of this whole battle is because Grok isn't up to part to be an acceptable alternative.

  • ternwer a day ago

    As far as we can tell, OpenAI and Google seem to be ok with it and not resisting. It would be easier for Anthropic's cause if they did.

  • alangibson a day ago

    Yea but every warfighter will get a waifu

  • popalchemist a day ago

    It's better than actively aiding them. Make them struggle at every turn.

    • impulser_ a day ago

      Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.

      • mikeyouse a day ago

        This of course raises the question on whether as an American I have more to fear from the Chinese government or the US one.. given everything happening in the Executive Branch here, that’s a disappointingly hard question to answer.

        • impulser_ a day ago

          I think that's an easy question to answer, but obviously you don't fear the Chinese government you're not a Chinese citizen. You can actively talk about your disagreements with the US government, that not a right the Chinese have.

          • popalchemist 18 hours ago

            Can you? By ICE agents' own admission on video, they have been adding people to "domestic terrorist" watchlists (just for verbally dissenting, making recordings with a phone, etc) which are then used by Palantir to disappear people directly from their homes - even US citizens. Palantir, the CEO of which gleefully admits to knowing many Nazis and seems to get off on the fact that his software "kills people" (direct quote).

        • krapp a day ago

          >that’s a disappointingly hard question to answer

          It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or sending them to concentration camps, threatening state governments and private companies to comply with suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.

          What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?

          To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.

          • popalchemist 18 hours ago

            This is the correct take. It may be a different question for people living within China, but for Americans, the US Gov is a direct threat to their lives.

      • int_19h 9 hours ago

        > Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.

        They already have the best and most expensive toys in the world, and they mostly seem to be waging aggressive wars with them. Perhaps if the toys weren't so shiny and didn't make it all so one-sided, they wouldn't?

      • GolfPopper a day ago

        If the American military was focused on defending the United States, it would be a very different beast. The 21st Century American military is a tool for transferring wealth from the public to influential parties, and for inflicting destruction on non-peer nations who pose obstacles to influential parties interests. Defending the United States against various often-invoked hobgoblins is at best a very distant concern, closer to pure lip service than reality.

      • 8note 19 hours ago

        but the "people defending you" have been commiting clear and obvious war crimes?

      • Jolter 17 hours ago

        The Department of War under Trump has proven itself to not be interested in defending you, the American people. All they’ve done so far is aggression against foreign supposed adversaries.

      • georgemcbay a day ago

        I'm a natural-born American (many generations back) and firmly believe that if we ever get into a hot war with China, it will be because of American provocation, not Chinese.

      • popalchemist 18 hours ago

        I am American born and raised and I consider our current government mass murderers who I trust as much as I would have the Nazis. It was a good thing that the Nazis did not get the a-bomb before us, and the same principle applies here. The fewer magnifiers of their power the better. They are a scourge on human rights, and the world.

  • klooney a day ago

    Grok in unhinged mode piloting an Apache, what could go wrong.

FrustratedMonky 9 hours ago

This also helps build Anthropic hype.

There are military officials saying they need anthropic because it is so good. They can't live without it.

All of this really helps Anthropic.

Its good publicity for them. And gets the military on record saying they are so good they are indispensable. And they can still look like the good guys for resisting, because they were forced.

siliconc0w 20 hours ago

Good to them standing up to this administration. I doubt they actually want to put Claude in the kill-chain but this gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to go through the switching costs for xAI - given Elon more reason to line republican campaign coffers.

I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.

alach11 a day ago

A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.

What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?

  • easton a day ago

    It’s not unusual for legal departments to take offense to these sorts of things, because now everyone using Claude within the DoD has to do some kind of audit to figure out if they’re building something that could be construed as surveillance or autonomous weapons (or, what controls are in place to prevent your gun from firing when Claude says, etc). A lot of paperwork.

    My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.

    • mwigdahl a day ago

      It's that, as I understand it. Anthropic is the only vendor certified to run its models on DoD/DoW classified networks.

  • cmrdporcupine a day ago

    Same reason they cut funding for universities that had DEI mandates, etc. and made a big spectacle of doing it despite it often being very little money etc. etc.

    It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.

    He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.

  • SpicyLemonZest a day ago

    He pushed the issue to an ultimatum because he is an unqualified drunk, and thinks that it's against the law for anyone to try and stop the US military from doing something they want to do. This isn't an isolated issue; he tried to get multiple US Senators prosecuted for making a PSA that servicemembers shouldn't follow illegal orders.

  • tabbott a day ago

    What makes you want to believe the Trump Administration when it claims it doesn't want to do domestic mass surveillance?

10297-1287 a day ago

They want to be nationalized, which is the most profitable exit they'll ever get.

ethagnawl 20 hours ago

The official name of this organization remains _The United States Department of Defense_.

anonym29 a day ago

Anthropic has already cooperated too much with the US Intelligence Community, but better some restraint than none, and better late than never.

lynx97 11 hours ago

With all this talk about AI and autonomous weapon systems. It seems like one of John Carpenters first movies, and my favourite B-movie, is coming back strong!

Maybe I should call ChatGPT "Bomb"... I already use "make it so" for coding agents, so...

huslage a day ago

It is not the Department of War. He's towing the line from the get-go. Forget this guy.

DudeOpotomus 8 hours ago

It's never wrong to do the right thing.

Trump and his cronies are short timers. They will all be gone in a few years, many in prison, many in the ground.

Treat them with abandon and disdain, because they are the worst people in the history of the USA. Stand on your principles because they have none.

worik 13 hours ago

Is it so normal that the USA should be in such a state of constant war, and war readiness that this even makes sense?

coolca 21 hours ago

Imagine being so cautious with your words, only to have 'Department of War' in your title

verisimi 16 hours ago

It sounds to me like anthropic are basically 'all in' except for the caveats. Looking at the 2 examples they provide:

> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

Why not do what the US are purported to do, where they spy on the others citizens and then hand over the data? Ie, adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.

> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.

kittikitti 6 hours ago

I simply don't trust any of their moral posturing when they've never provided open-weight models and don't have any intention of doing so. Anthropic continuously makes hypocritical statements on safety and ethics. They made their bed with the U.S. government, and now they don't want to sleep in it.

IAmGraydon 19 hours ago

They should try Sam Altman. He's just the kind of guy who would bend over for this kind of authoritarian demand.

insane_dreamer 19 hours ago

Good to see one AI company not selling out their values in exchange for military contracts. This shouldn't be rare, but it is. Good for them.

mrcwinn 20 hours ago

I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.

bamboozled a day ago

Move your company out of the USA?

pousada a day ago

Department of War is just such a fucking joke title - when has the US stooped so low, I used to believe in you guys as the force of good on this planet smh

  • baggachipz a day ago

    Well then I don't know where you've been for the last ~10~ ~20~ 70 years

  • mwigdahl a day ago

    When? Its entire history from the foundation of the Republic to 1947. The name was changed after WWII; now a faction wants to change it back. The difference in name never changed the behavior, in either direction.

  • darvid a day ago

    I'm 33 years old, would you mind telling me which year you thought this was, force of good stuff? might be before my time

    genuinely curious, I got nothing

    • mylifeandtimes 20 hours ago

      it was before your time.

      In WWII, we saved the world from what is now seen as some really evil stuff. Not alone of course, Europe and Russia made huge sacrifices and that's where much of the war was fought. But US arms and blood were the decisive factor, Germany was winning, Japan was winning.

      After WWII, the US decided to rebuild the world. We turned our enemies (Germany, Japan) into our close allies.

      And the people who did it were really and seriously morally committed to doing what they thought was right. It was about building a country, working together. Not the insane politics of today.

      Look, it wasn't all rose-tinted glasses. Bad stuff happened, and McCarthy was worse that what we currently have. And the civil rights movement and all of that. And the stupid wars, Korea, Vietnam, all the smaller police actions. Bad shit was done.

      But on balance, the US was seen as the force of good, and the guaranteeor of world peace and the prosperity that allows.

    • phtrivier a day ago

      The USA were pretty clearly on the "better side" of conflicts in 1941-1945, during the Cold War (at least as far as Europe and the Marshall plan was concerned). In Koweït and central Europe during the 90s. You may even argue for Afghanistan post 9-11 (although the state building was botched.) in the 2000s. ISIS is a footnote in history because of US intervention (from Trump first term, of all things.) And Ukraine would not be against getting the support it had in 2022 back under Trump.

      Does not mean that very bad things were not happening at the same time.

      But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.

jwpapi 20 hours ago

Am i the only one who understands the deparments position? Like if another country will have it without safeguards, why would I not want it without safeguards. I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage, would be weird to accept that as department of war.

I understand the risk, but that is the pill.

  • 8note 19 hours ago

    they could use a different provider for the kill chain.

    we must use claude to decide whether to nuke iran, or else our gun manufacturers arent allowed to use to to run spreadsheets

    is a bit ridiculous.

dev1ycan 11 hours ago

This doesn't read too badly, but I still do not believe that ANY AI company is ethical, at all.

ponorin 10 hours ago

As a non-American they've lost me already at the first sentence.

United States, even before Trump, has always been about projecting power rather than spreading democracy. There are several non-Western, former colonies who does democracy better than the US. Despite democratic backsliding being a worldwide phenomenon very few have slid back as much as the US. The US have regularly supported or even created terrorists and authoritarian regimes if it meant that the country wouldn't "go woke." The ones that grew democracy, grew in spite of it.

This statement shows just how much they align with the DoD ("DoW" is a secondary name that the orange head insists it's the correct one. Using that terminology alone speaks volumes.) rather than misalign. This coupled with their drop of their safety pledge a few days ago makes it clear they are fundamentally and institutionally against safe AI development/deployment. A minute desagreement on the ways AI can destroy humanity isn't even remotely sufficient if you're happy to work with the bullies of the world in the first place.

And the reason is even more ridiculous. Mass surveillance is bad... because it's directed at us rather than the others? That's a thick irony if I'd ever seen one. You know (or should have known) foreign intelligence has even less safeguards than domestic surveillance. Intelligence agencies transfer intercepted communications data to each other to "lawfully" get around those domestic surveillance restrictions. If this looks at all like standing up that's because the bar has plunged into the abyss, which frankly speaking is kind of a virtue in USA.

ThouYS 13 hours ago

this is.. a nothing burger? they don't exclude working for autonomous weapons, nor do they exclude mass surveillance. so what gives?

nova22033 20 hours ago

Why does DoD need claude? I thought xAI was "less woke" and far better than claude

marshmellman 19 hours ago

Well, now if DoD moves to another AI provider, we’ll know what was compromised.

Aeroi 18 hours ago

in hindsight, the smart thing to do would have been to accept the contracts, knowingly enshittify the request, and protect other bad actors like Elon and xAI from ruthlessly compromising our democracies.

int32_64 a day ago

Anthropic wants regulatory capture to advantage itself as it hypes its products capabilities and then acts surprised when the Pentagon takes their grand claims about their products seriously as it threatens government intervention.

This is why people should support open models.

When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.

narrator 17 hours ago

I mean you're all going to get killed by fully autonomous China AI war robots in 10 years anyway if you're not pure blood Han Chinese, but hey at least you'll provide something to laugh at for future Chinese Communist party history scholars. They will say, "Look at the stupid Baizuos, our propaganda ops convinced them all to commit collective suicide. Stupid barbarians. They proved they are an inferior race."

Not joking, I've heard from sources that hardliners in the CCP think they can exterminate all white people followed later by all non-Han, but just keep on going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do to.

parhamn a day ago

Now, I'm curious. How Bedrock/Azure Claude models work?

Do these rules apply to them too?

gnarlouse 17 hours ago

huge if true.

they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.

jijji 20 hours ago

the government should not be using any private LLM, they should build their own internal systems using publicly available LLM's, which change frequently anyway. I don't see why they would put their trust in a third party like that. This back and forth about "ethics" is a bunch of nonsense, and can be solved simply by going for a custom solution which would probably be orders of magnitude cheaper in the long run. The most expensive part is the GPU's used for inference, which can be produced in silicon [1].

[1] https://taalas.com/products/

moktonar 16 hours ago

Well fucking done. Anthropic has just gained the “has bollocks” status. Also now we know what the govt is really up to with AI. G fucking g

7ero 14 hours ago

Sound like they're following the google playbook, don't be evil, until the shareholders tell you to.

7ero 14 hours ago

Sounds like following the google playbook, don't be evil, until the shareholders tell you to.

OrvalWintermute a day ago

I don't think this is genuine concern, I think this is instead, veiled fear of the TDS posse being covered by feigned concern.

Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!

jibal a day ago

It's the Department of Defense, not the Department of War ... only Congress has the legal authority to change the name, and they haven't.

brooke2k a day ago

The constant reference to "democracy" as the thing that makes us good and them bad is so frustrating to me because we are _barely_ a democracy.

We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?

Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.

Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.

There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.

The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.

He's also threatining to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.

And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?

Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.

We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.

And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.

isamuel 19 hours ago

Amodei’s use of “warfighters” (a Hegseth-era neologism for “soldiers”) is truly nauseating.

  • WatchDog 19 hours ago

    Soldier is an Army specific term. Like Sailor, Airman, Marine, etc.

    Perhaps the term you are looking for is service member?

    Warfighter tends to refer to anyone involved in a role that directly supports combat operations, it may or may not be a service member.

ulfw 9 hours ago

Department of War.

What a shit name

lenerdenator 10 hours ago

Nitpick: It's still the Department of Defense, not the Department of War. Don't let the chuds live in their delusional fantasy world.

mrcwinn 20 hours ago

Keep in mind: the government is very invested logistically in Anthropic.

So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.

Because if there were some kind of concession, it would have been simplest just to work with Anthropic.

Delete ChatGPT and Grok.

sneak 10 hours ago

The only reason you ask for these capabilities is because you want to use these capabilities.

That is, the news here is that DoW (formerly DoD) is willing and able and interested in using SOTA AI to enable processing of domestic mass surveillance data and autonomous weapons. Anthropic’s protests aside, you can’t fight city hall, they have a heart attack gun and Anthropic does not. They’ll get what they want.

I am not particularly AI alarmist, but these are facts staring us right in the face.

We are so fucked.

delaminator 13 hours ago

Hegseth doesn't need autonomous drones, he's got the Treasury.

keeeba a day ago

Big respect

Total humiliation for Hegseth, sure there will be a backlash

  • techblueberry a day ago

    I thought it was interesting he threw in the bit about the supply chain risk and Defense Production Act being inherently contradictory. Most of the letter felt objective and cooperative, but that bit jumped off the page as more forceful rejection of Hegseth's attempt to bully them. Couldn't have been accidental.

  • calgoo a day ago

    I see it as the opposite, its a lousy excuse of a message trying to get people not to think that they are giving in. Instead they list the horrible uses that they are already helping the government with. Dont worry, we only help murder people in other countries not the US. They also keep calling it the "Department of War" which means that this message is not for "us", its them begging publicly to Hegseth.

    • adi_kurian a day ago

      What would the ideal response have been, in your view?

      • calgoo 13 hours ago

        Well, they should not have made a contract in the first place with a government that we all knew was going to be this bad. They should be doing everything in their power to cancel all government contracts at this point.

    • jpcompartir 13 hours ago

      "Regardless, these threats do not change our position: we cannot in good conscience accede to their request."

      • calgoo 13 hours ago

        Yes, that is great, for people from the US. For people in Europe and other locations, this just proves that they dont really care as the tool is already being used against us. It quite clear to me that anyone outside the US should immediately cancel all contracts with these corporations, as well as work their hardest at blocking their bots online.

        • jpcompartir 7 hours ago

          As a non-US citizen, I'm quite glad in the knowledge that Claude won't be used to kill other non-US citizens with autonomous weapons

delaminator a day ago

"so we'll do it and feel guilty about it"

  • bawis 9 hours ago

    That has been the war politics of the western in the last century or so, nothing new.

jajuuka 5 hours ago

While it's good that they didn't fold, they didn't need to lick the boot that hard. So much spent on "we love the US and democracy and hate communism and the Chinese." They are trying really hard to keep this contract as is, which I think says more than folding to these additional demands.

alephnerd a day ago

One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.

Working with the DoD/DoW on offensive usecases would put these contracts at risk, because Anthropic most likely isn't training independent models on a nation-to-nation basis and thus would be shut out of public and even private procurement outside the US because exporting the model for offensive usecases would be export controlled but governments would demand being parity in treatment or retaliate.

This is also why countries like China, Japan, France, UAE, KSA, India, etc are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms because it was their governments that build it or funded it.

Imagine if the EU demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models because most policymakers along with a number of us in the private sector are viewing foundation models from the same lens as hyperscalers.

Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.

[0] - https://www.anthropic.com/news/mou-uk-government

[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...

[2] - https://www.anthropic.com/news/opening-our-tokyo-office

[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008

  • arduanika 17 hours ago

    I tried several times to read your second paragraph, and failed to parse it. Could you break it into several sentences somehow? It's possible you're making an important point, but I can't tell what you're trying to say.

Bengalilol 14 hours ago

TLDR: « depends on where you live »

jiggawatts a day ago

Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.

I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!

This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.

Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.

If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.

tehjoker 20 hours ago

The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, which is nearly purely the framing from the U.S. side as an aggressor.

AI should never be used in military contexts. It is an extremely dangerous development.

Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.

  • 8note 19 hours ago

    ukraine is using ai in a military context with some effectiveness. i dont think theres much of a problem with having the drone take over the last couple minutes of blowing up a russian factory

myko 20 hours ago

There is no Department of War. This is the dumbest fucking timeline.

  • myko 8 hours ago

    To be clear, despite the downvotes, my statement is true. It is the Department of Defense. As someone who spent a good portion of my life working under it, it is offensive to me people are going along with the pretense that these idiots can unilaterally rename the organization.

einpoklum 9 hours ago

The first sentence was quite enough:

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Ah, another head of a huge corporation swears to defend his stockholders' commercial interests through imperial war against other nation-states. And of course "we" are democratic while "they" are autocratic.

The main thing that's disappointing is how some people here see him or his company as "well-intentioned".

creatonez 17 hours ago

> Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.

It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern day holocaust tabulation machine companies, and this time randomly they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to the hague for war crimes trials.

mvkel a day ago

"as an ai safety company, we only believe in -partially- autonomous weaponry"

Ads are coming.

  • ddxv a day ago

    I'll be glad if they could open their platform enough so that it could run on ads and not 200 dollar subscriptions

    • mvkel 20 hours ago

      for sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability

OutOfHere a day ago

The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.

  • knfkgklglwjg a day ago

    The best open models are from china though.

    • OutOfHere 21 hours ago

      It's a good reason to fund open model development domestically.

dakolli 20 hours ago

This is a PR play by Anthropic, likely in coordination with the administration. They don't care, they just need the public to view them as a victim here, and then its business as usual.

I prefer they get shutdown, llms are the worst thing to happen to society since the nuclear bomb's invention. People all around me are losing their ability to think, write and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.

Remember, the person that showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator lol.

joshAg 21 hours ago

torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at nearby nexus

probably_wrong a day ago

I have read the whole thing but I nonetheless want to focus on the second paragraph:

> Anthropic has therefore worked proactively to deploy our models to the Department of War

This should be a "have you noticed that the caps on our hats have skulls on it?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.

There is no such a thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as it has happened, by identifying which one residential building should be collapsed on top of a single military target, civilians be damned) good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.

Disclaimer: I'm not a US citizen.

[1] https://m.youtube.com/watch?v=ToKcmnrE5oY

  • ricardobeat a day ago

    What is their other possible move here, considering the government is threatening to destroy their business entirely?

    • probably_wrong a day ago

      One alternative would be to call the government's bluff: if they truly are as indispensable as they claim then they can leverage that advantage into a deal.

      But at a more general level, I'd say that unethical actions do not suddenly become ethical when one's business is at risk. If Anthropic considers that using their technology for X is unethical and then decide that their money and power is worth more than the lives of the foreigners that will be affected by doing X then good for them, but they shouldn't then make a grandstand about how hard they fought to ensure that only foreigners get their necks under the boots.

    • ninjagoo 18 hours ago

      > What is their other possible move here, considering the government is threatening to destroy their business entirely?

      You must not be American, then. We all know that these corporate favoring contract terms are managed through campaign contributions; savvy?

      Anthropic must have high school interns as govt liaisons, and not very bright ones

  • XorNot a day ago

    Warfighters is a pretty common term though. There's a fair bit of nuance in when and how you'd use it.

    • cwillu a day ago

      It's a common term that comes with a lot of criticism in the vein of noticing the skulls.

0xbadcafebee 19 hours ago

Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.

eigencoder 6 hours ago

Honestly, I don't get it. So many tech companies are happy to do business in China and serve its interests, when it would gladly see them fail. But they won't defend their own country and its interests.

  • raincole 6 hours ago

    Businessmen are not well known for their loyalty.

I_am_tiberius 13 hours ago

I'm still waiting for a proof that they don't use user data (directly or derived) for training.

ozzymuppet 20 hours ago

Wow, I expected them to cave, and they did'nt!

I'll be signing up to Claude again, Gemini getting kind of crap recently anyway.

DiabloD3 15 hours ago

This seems to be at least partially written by AI: There is no Department of War, it is called the Department of Defense.

  • zzot 15 hours ago

    That’s not true anymore. Trump renamed it in September: https://www.war.gov/News/News-Stories/Article/Article/429582...

    • calgoo 13 hours ago

      Just like the Gulf of Mexico is still called the Gulf of Mexico, if we just ignore his ramblings and continue calling the department of defense, we undermine his whole point. If we fall for all their crap and just accept it, then we loose in the end. Any resistance to a Fascist government is good resistance. Anything that makes their life's a little shittier is good. Better that they go around having tantrums about how they renamed it but no one is paying attention.

      • nla 9 hours ago

        TDS factor 11.

ssrshh 7 hours ago

This is quite the PR stunt. Tech companies can't stop copying Apple

willmorrison a day ago

They essentially said "we're not fans of mass surveilance of US citizens and we won't use CURRENT models to kill people autonomously" and people are saying they're taking a stand and doing the right thing? What???

I guess they're evil. Tragic.

  • fluidcruft a day ago

    It's not inconceivable that AI could become better than humans at targeting things. For example if it can reliably identify enemy warcraft or drones faster than people can react. I'm not saying Claude's models are suited for that but humans aren't perfect and in theory AI can be better than humans. It's not currently true and would need to be proved, but it doesn't seem unreasonable. It could well be better than something like deploying mines.

    • shevy-java 7 hours ago

      Indeed. The AI will decide who has to die and who may live.

      Skynet in Terminator was scary. The AI Skynet is even scarier - and sucks, too.

  • micromacrofoot a day ago

    We're living in a time where most tech companies are donating millions of dollars to the current leadership in exchange for favors.

    In that climate this is a more of a stand than what everyone else is doing.

zkmon 10 hours ago

Same as saying "Look I sold nukes to USA to protect democracy, but we put 2 rules about usage". Everyone got nukes and nobody can enforce the rules. Just whitewashing of pure business greed, using terms like national security, democracy etc.

toddmorrow 7 hours ago

his dilemma wasn't moral. he has none. it was a marketing snafu. he marketed anthropic as different when the cost of claiming that was zero. now there's a cost, and he immediately changes his tune. his statement was essentially "why refrain from building killing machines when no one else is refraining? why limit ourselves unilaterally?" duley proves he never had morals in the first place.

  • MWParkerson 6 hours ago

    Nobody said anything like that in the linked post not sure what you're on about

nla 9 hours ago

I truly do not understand why anyone thinks serious work can be done with their models, let alone government work. Their models do no hold a candle to Open AI.

caerwy 6 hours ago

His real beef seems to be with “any lawful use”. He doesn't agree with the law and wants to only sell to customers who agree with his own moral code. I respect his moral choice but suspect this is not how a market economy ought to work. He ought to lobby government to change the law rather than make moral judgements about his customers.

  • fooker 6 hours ago

    When you make the market, you too can dictate how a 'market economy' ought to work :)

  • muddi900 6 hours ago

    Free Speech rights mean not being compelled to act against your moral code.