
Anthropic’s post-Pentagon resistance surge is fading

Anthropic’s viral moment is waning.

Business Insider reviewed data from multiple tracking firms, which show that interest in the AI startup has begun to plateau following a remarkable surge when Anthropic refused to back down amid a contract dispute with the Pentagon.

Data from Appfigures, an app intelligence firm, shows that ChatGPT overtook Claude in estimated daily downloads on the US Apple App Store and Google Play store earlier this month, a reversal of the brief window in which Anthropic reigned supreme.

Anthropic’s rise coincided with a major hit to OpenAI and its CEO, Sam Altman, after what Altman described as a “rushed” deal with the Pentagon, struck just hours after Defense Secretary Pete Hegseth said he would formally label Anthropic a national security risk.

Kara Lee, a brand and digital analyst at Sensor Tower, a market intelligence firm, wrote to Business Insider that Claude’s daily download rate “has largely plateaued, averaging a 2% DoD (day-over-day) decline” as of March 25. At the same time, downloads of ChatGPT have increased 1% day over day.

The news isn’t all bad for Anthropic. The data shows that interest in Claude is still far above where it was nearly two months ago.

In early February, Claude wasn’t even in the top 40 of the most downloaded free apps on Apple’s US App Store. As of Friday afternoon, Claude ranked No. 2, behind ChatGPT. And the number of daily active users of Claude has continued to tick up, according to data from Similarweb.

Lee said that Anthropic has seen a 166% increase in daily downloads compared to February, as of March 25. ChatGPT was down 4%.

Despite Anthropic’s widely regarded AI models, OpenAI has long dominated the consumer space after jump-starting the generative AI race with the release of ChatGPT in November 2022. Anthropic has instead zeroed in on the enterprise market, which CEO Dario Amodei has previously described as a more reliable business model.

“I think we have a better business model. I think we have better margins,” Amodei said during The New York Times’ DealBook summit in December. “I think we’re being responsible about it. But again, let’s say you have a different business model. Let’s say you have a consumer business model, you’re kind of — the source of your revenue isn’t as good, your margins are uncertain.”

The Pentagon flap provided an unexpected opening back into the mainstream. People wrote “Thank You” messages in chalk outside Anthropic’s headquarters. Katy Perry published a screenshot on X of her new Claude Pro subscription, complete with a heart. The viral moment followed advances in Anthropic’s AI models, including new coding- and business-focused tools whose release triggered a sell-off in software stocks.

Just as consumer interest was growing, Anthropic officials expressed immense concern about their enterprise business. In legal filings, the company said billions could be lost if the national security risk designation were to remain.

Anthropic sued the Pentagon and the Trump administration to block Hegseth’s decision to formally label Anthropic as a “supply chain risk.” Separately, President Donald Trump ordered all federal agencies to cease using Anthropic’s products within six months. Despite the current tensions, Anthropic is still trying to work with the federal government. The Washington Post reported that Claude is being used to help the US military carry out strikes in Iran.

On Thursday, a federal judge in San Francisco temporarily blocked the Pentagon from labeling Anthropic as a national security risk. US District Judge Rita Lin put her ruling on hold for the week to give the Justice Department time to appeal.

Anthropic is also showing some strain from its growth. Earlier this week, the AI company tweaked its usage limits to try to level out demand during peak hours.

“I know this was frustrating,” Thariq Shihipar, who works on Claude, wrote on X. “We’re continuing to invest in scaling.”

Madison Hoff contributed graphics for this story.





Lloyd Lee

Anthropic’s lawyer says government is ‘pressuring’ companies to ditch the AI startup, go to competitors

Anthropic’s lawyer said the US government is “pressuring” the startup’s customers to go to rival AI providers amid an escalating fight between the Claude developer and the Department of Defense.

During a status conference on Tuesday, Michael Mongan, an attorney for Anthropic, said the Defense Department’s decision to effectively blacklist the startup from working with the US military is bringing “real and irreparable harm” to the company each day.

Mongan said customers have begun “expressing doubt” about working with Anthropic and that the government has been on a pressure campaign to get Anthropic’s customers to drop the provider and go to competing AI companies.

“We’ve had university systems and business-to-business companies that have switched to competing AI companies,” Mongan said. “And this is all the predictable result of the defendant’s actions and the uncertainty they’ve created, as well as the fact that defendants have been affirmatively reaching out to our customers and pressuring them to stop working with Anthropic and switch to other AI companies.”

Last month, after contract negotiations with the AI startup fell apart, Defense Secretary Pete Hegseth announced that Anthropic was a “supply chain risk” and framed the move as extending beyond direct military work.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth said in an X post on February 27.

The scope of the supply chain risk label is in dispute. Microsoft previously told Business Insider that its lawyers concluded the company can still use Anthropic for non-military-related work. Microsoft also filed an amicus brief urging the federal court to temporarily block the government’s supply chain risk designation.

The issue centers on Anthropic’s stance that its frontier model, Claude, cannot be deployed for autonomous weapons and mass surveillance of US citizens. Defense officials have said in response that a private company cannot dictate what the military can and cannot do.

Anthropic CEO Dario Amodei said in a blog post on February 26 that the company could not accede to the government’s demand for unrestricted, lawful use of its model. A day later, Hegseth formally designated Anthropic a supply chain risk.

Anthropic sued the government on Monday, seeking a temporary restraining order to continue doing business with the government as the case proceeds. The company said in the suit that the Defense Department did not provide adequate grounds to label it a national security risk.

In addition, the company said the designation had never before been applied to an American company and that the move was retaliatory, violating its First Amendment rights to express its views on AI safety and limitations.

The fallout from Anthropic’s blacklisting has been swift, according to legal filings.

Krishna Rao, Anthropic’s chief financial officer, said in a declaration filed on Monday that the DoD had contacted several “portfolio companies about their use of Claude” and that those clients have “grown worried and uncertain” about their ability to use the model.

The CFO said the government’s action could reduce Anthropic’s 2026 revenue by “multiple billions of dollars.”

Spokespeople for Anthropic and the Pentagon, as well as Anthropic’s lawyer, did not respond to a request for comment.






Microsoft says Anthropic’s products can stay on its platforms after lawyers ‘studied’ the Pentagon supply chain risk designation

Microsoft said Anthropic’s AI tools aren’t going anywhere on its platforms despite the Pentagon blacklisting the startup.

The Pentagon on Thursday formally told Anthropic that “the company and its products are deemed a supply chain risk, effective immediately.” Defense Secretary Pete Hegseth has said the designation effectively bars companies with defense contracts from doing business with Anthropic.

Anthropic has said it plans to challenge the decision in court.

The designation follows a dispute between the AI startup and the Pentagon over how its Claude models could be used. Anthropic has said it will not allow its technology to be deployed for mass domestic surveillance or fully autonomous weapons.

A Microsoft spokesperson told Business Insider on Thursday that the company’s “lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers.”

Claude will still be available to customers through platforms such as M365, GitHub, and Microsoft’s AI Foundry, except for the Department of War, the spokesperson said in a statement.

“We can continue to work with Anthropic on non-defense related projects,” the spokesperson added.

Microsoft has deepened its ties with Anthropic in recent months. In November, the companies said that Anthropic would spend $30 billion on Microsoft’s Azure cloud services, while Microsoft agreed to invest up to $5 billion in the startup.

Microsoft also said in September that it was integrating Anthropic’s models into Microsoft 365 Copilot alongside systems from OpenAI.

The Anthropic-Pentagon saga

In a statement published on Thursday evening, Anthropic CEO Dario Amodei said the company is in talks with the Defense Department even as it is preparing for court.

“I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,” Amodei wrote.

However, Emil Michael, a Department of War official, said in a post on X following Amodei’s statement that negotiations are off the table.

“I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI,” Michael wrote.

Amodei also offered an apology in his statement after The Information reported that he had privately blasted the White House in a memo to staff after talks with the Pentagon fell apart.

In the memo, Amodei wrote that the administration disliked his company because he had not offered “dictator-style praise to Trump.”

“Anthropic has much more in common with the Department of War than we have differences,” Amodei said on Thursday.





Lakshmi Varanasi

Claude hits No. 1 on App Store as ChatGPT users defect in show of support for Anthropic’s Pentagon stance

While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of the App Store.

Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed. President Donald Trump ordered federal agencies to phase out its technology.

Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a Friday night post on X that it had reached an agreement with the Department of War to deploy AI models in its classified network.

The agreement has left some loyal ChatGPT users uneasy about OpenAI’s ambitions, prompting online debates about the ethical implications, and some said they were defecting to its rival, Claude.

As of 6:38 p.m. ET on Saturday, Claude ranked No. 1 among the most downloaded free productivity apps on Apple’s App Store, ahead of ChatGPT.





Converts have taken to social media to share screenshots documenting their switch.

Pop musician Katy Perry wrote on X that she was “done,” alongside a screenshot of Claude’s pricing page with a red heart around the $20-per-month “Pro” plan.

Another X user, Adam Lyttle, wrote “Made the switch,” alongside a screenshot of his email inbox with a receipt from Anthropic and cancellation confirmation from OpenAI.

On Reddit’s ChatGPT subreddit, dozens of users say they’ve deleted their accounts and are urging others to do the same.

“Cancel ChatGPT” has become a common refrain online, while some users have taken a more personal tone, saying Altman’s move “crossed the line.”

The agreement hasn’t polarized all AI users, however.

In one Reddit thread, several commenters said the news does not affect their choice of AI model, arguing that Anthropic’s work with Palantir raises similar concerns. In November 2024, Anthropic, Palantir, and Amazon Web Services struck an agreement to provide US intelligence and defense agencies access to Claude models.

After Secretary of War Pete Hegseth said he would designate Anthropic as a “supply chain risk to national security,” Anthropic said it would “challenge any supply chain risk designation in court.”

In his Friday post, Altman said the Department of War had agreed with two of OpenAI’s safety principles.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

By Saturday afternoon, OpenAI published a more detailed description of its contract with the DoW, including the specific language it used surrounding the use of its models for surveillance and autonomous weapons.

On the topic of autonomous weapons, OpenAI said:

The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

On the topic of mass surveillance, OpenAI said:

The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.

While some chatbot users suggested it’s all fair in business, war, and federal procurement, others suggested the Pentagon’s stance may have handed Anthropic a public relations win.

X user Tae Kim joked that Hegseth might need a new title: “Secretary Hegseth Chief of Claude Marketing.”






Trump orders federal agencies to stop using Anthropic’s technology

President Donald Trump says federal agencies won’t be using Anthropic’s technology anymore.

“We don’t need it, we don’t want it, and will not do business with them again,” Trump wrote on Truth Social on Friday.

The announcement comes amid a dispute between the AI company and the Department of Defense.

Trump said that there would be a six-month phase-out period for departments, including the Department of Defense, that are “using Anthropic’s products, at various levels.”

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

Trump’s announcement comes just a few hours before the Friday evening deadline defense officials had given Anthropic to agree to the military’s terms of use for the company’s frontier model, Claude.

Earlier this week, the two parties came to an impasse over how the military can deploy Claude.

The issue appeared to revolve around two safeguards Anthropic was not willing to drop: bans on mass surveillance of US citizens and on autonomous weapons.

Defense Secretary Pete Hegseth had given Anthropic’s CEO Dario Amodei until Friday, 5:01 p.m. Eastern Time to get on board with the military. Hegseth also warned that the government could invoke the Defense Production Act — a wartime law that gives the president broad authority over a company’s resources — and designate Anthropic as a supply chain risk.

Both would be unprecedented moves by the government against an American technology company, experts previously told Business Insider.

On Thursday, Amodei published a blog post stating that the Defense Department had added language to its contract allowing for “any lawful use” of its model.

A source familiar with the negotiations told Business Insider that this language effectively gave the military discretion over how it uses Claude.

The Anthropic CEO said in his post that the company would prefer to continue serving the department but that it could not “in good conscience accede to their request.”






Elon Musk says Anthropic’s philosopher has no stake in the future because she doesn’t have kids. Here’s her response.


Marc Piasecki/Getty Images

  • Anthropic’s resident philosopher, Amanda Askell, helps shape Claude’s personality and morals.
  • Elon Musk said she’s not qualified because she doesn’t have kids and therefore has no stake in the future.
  • Askell had thoughts.

Anthropic famously employs a Scottish philosopher named Amanda Askell.

Her job is to imbue its chatbot, Claude, with a personality and a set of moral guardrails. She is essentially teaching it to be cool and good.

Elon Musk, however, doesn’t think she’s qualified.

“Those without children lack a stake in the future,” Musk posted on X in response to a profile of Askell published by The Wall Street Journal.

The Journal profile does not say whether Askell has kids. Musk, who has imbued his own chatbot, Grok, with a distinct personality, has 14 of them. Musk is known for promoting a brand of pronatalism that’s become popular among Silicon Valley elites.

Askell responded with her trademark dry intellectualism.

“I think it depends on how much you care about people in general vs. your own kin,” Askell wrote. “I do intend to have kids, but I still feel like I have a strong personal stake in the future because I care a lot about people thriving, even if they’re not related to me.”

“I think caring about your children can make you feel invested in the future in a new and very profound way, and I do understand people wanting to convey that,” she added.

The responses to their short back-and-forth were as varied as you might expect on Musk’s social media network. A day later, Askell posted again.

“I’m too right wing for the left and I’m too left wing for the right,” she said. “I’m too into humanities for those in tech and I’m too into tech for those in the humanities. What I’m learning is that failing to polarize is itself quite polarizing.”





Shuby

Anthropic’s CEO says we’re in the ‘centaur phase’ of software engineering

Dario Amodei has a novel analogy to describe how AI and humans are working together.

On an episode of the “Interesting Times with Ross Douthat” podcast published on Thursday, the Anthropic CEO compared human engineers and AI working together to the mythical horse-and-human combination known as the centaur.

He used chess as an example: 15 to 20 years ago, a human-AI team, with the human checking the AI’s moves, could beat either an AI or a human playing alone. Now, AI can beat people without that layer of human supervision.

Amodei, who cofounded AI lab Anthropic in 2021, added that the same transition would happen in software engineering.

“We’re already in our centaur phase for software,” Amodei said. “During that centaur phase, if anything, the demand for software engineers may go up. But the period may be very brief.”

He said he’s concerned about the “big disruption” entry-level white-collar work would see. The CEO added that it may be unfair to compare this to the shifts from farm to factory to knowledge work, because those transitions played out over decades or centuries.

“This is happening over low single-digit numbers of years,” he said.

Amodei is among the most prominent voices warning that AI could erase some white-collar work, especially in law, finance, and consulting. In a January essay, he predicted that AI could disrupt 50% of entry-level jobs in the next one to five years.

The leaders of other top AI labs, including Mustafa Suleyman and Demis Hassabis, have made similar comments about advanced AI automating service jobs within the next 18 months.

Execs at some software companies counter that AI will make engineers more productive and that companies will need more of them.

“The companies that are the smartest are going to hire more developers,” GitHub CEO Thomas Dohmke said on a July podcast. “I think the idea that AI without any coding skills lets you just build a billion-dollar business is mistaken.”

Atlassian’s CEO said that as AI advances, people will keep coming up with new ideas for the technology they want, and engineers will be needed to build it.

“Five years from now, we’ll have more engineers working for our company than we do today,” Mike Cannon-Brookes said in an October interview. “They will be more efficient, but technology creation is not output-bound.”





Henry Chandonnet

The creator of Anthropic’s Claude Code likes to hire engineers who do ‘side quests’ like making kombucha

Want a job at Anthropic? It might help to get a hobby.

The AI boom is changing the job requirements for engineers. Not only do they need coding skills, but they must also know how to operate vibe-coding tools and stay up to date with new AI models.

Anthropic leader Boris Cherny looks for something else: “Side quests.”

“When I hire engineers, this is definitely something I look for,” he said on “The Peterman Pod.”

Cherny’s definition of side quests includes “cool weekend projects,” like someone who’s “really into making kombucha.” It’s a sign that the engineer is curious and interested in other things, he said.

Much of Cherny’s own growth came from his side projects. Now a key figure at Anthropic, he created Claude Code, a tool that has become popular with engineers across the country.

“These are well-rounded people,” he said. “These are the kind of people I enjoy working with.”

Cherny also said he prefers that his new hires be “generalists.”

He gave the example of an engineer who can code, but is also able to work on product and design. That all-star engineer also seeks out user feedback.

“This is how we recruit for all functions, now,” he said. “Our project managers code, our data scientists code, our user researcher codes a little bit.”

Cherny isn’t alone in pushing for jobs to become more generalist. Figma CEO Dylan Field said in October that AI was causing job titles to merge, resulting in everyone being a “product builder.”

What else is Anthropic looking for? For some time, it monitored whether candidates used AI in their applications.

In May, Business Insider reported that Anthropic asked candidates for certain jobs not to use AI in their written responses so the company could test their “non-AI-assisted communication skills.”

Anthropic changed its policy in July, allowing candidates to seek out assistance from Claude.

For younger engineers, a job at Anthropic may be hard to come by. In May, Chief Product Officer Mike Krieger said on “Hard Fork” that he was focused on hiring experienced engineers and had “some hesitancy” about entry-level workers.

On the podcast, Cherny said his love of generalists came from his own career trajectory: having worked at startups since he was 18, he had to do everything.

“At big companies, you get forced into this particular swim lane,” he said. “It’s just so artificial.”



