
Judge temporarily blocks the Pentagon from declaring Anthropic a national security risk

A federal judge has granted Anthropic a major reprieve as the AI company challenges the Pentagon’s effective blacklisting.

On Thursday, US District Judge Rita Lin granted Anthropic’s request for a preliminary injunction to temporarily block the “Presidential Directive” that ordered federal agencies to stop using Anthropic’s technology, and Defense Secretary Pete Hegseth’s decision to formally label the AI frontier model maker as a “supply chain risk.”

Lin also stayed the effective date of the supply-chain designation, meaning it cannot take effect while the injunction is in place.

The decision is a victory for Anthropic and its CEO Dario Amodei, who refused to bow to Hegseth’s demands. It is not immediately clear if the Justice Department will appeal the decision. In the hours after talks with Anthropic fell apart, the Pentagon struck a deal with OpenAI.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said in a statement. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Spokespeople for the Pentagon and White House did not immediately respond to a request for comment.

In court filings, Anthropic officials said the risk designation could jeopardize potentially billions in revenue. If the injunction remains, Anthropic will be able to continue to do business with defense contractors.

Lin wrote in her decision that the injunction does not require the Defense Department to use Anthropic’s products or services.

Many in tech are closely watching the California case, since it tests whether the federal government can use some of its most severe powers to force a major AI company to agree to contractual terms. Microsoft, which filed an amicus brief in support of Anthropic, said it was concerned about potential repercussions for companies like itself that continue to partner with Anthropic.

Ahead of her ruling, Lin grilled the Justice Department over what she said looked like “an attempt to cripple Anthropic.” She said the Pentagon could have simply discontinued using Claude, but instead, the Trump administration took repeated actions that appeared designed to “punish” the company.

“One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Lin said during the hearing. “And specifically, my concern is whether Anthropic is being punished for criticizing the government’s contracting position in the press.”

Beyond the California case, Anthropic has a separate suit pending in the D.C. Circuit over the supply chain risk designation.

It also remains to be seen how the White House and the broader Trump administration will treat Anthropic beyond the actions Lin’s ruling compels.

During the hearing, Deputy Assistant Attorney General Eric Hamilton repeatedly said that the Pentagon questions Anthropic’s “reliability and trustworthiness.” Hamilton said that defense officials are concerned Anthropic may try to improperly skew its AI models or shut off access.

In recent weeks, Hegseth, who met with Amodei, said the AI startup put “Silicon Valley ideology above American lives.” President Donald Trump decried the “WOKE COMPANY” run by “Leftwing nut jobs” in a Truth Social post that was also cited in the California lawsuit.

“Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump wrote on Truth Social on February 27.





Lovable exec says ‘big boys and girls’ like OpenAI and Anthropic worry her more than other vibe coding startups

Other vibe coding players are not the biggest competition, says one Lovable exec.

“I always worry about the big boys and girls in the world,” Lovable’s head of growth Elena Verna said on a Sunday episode of the “20VC” podcast. “So, OpenAIs, Anthropics, Googles, Apples, more so than our competitors that spring up from the bottom or from sideways.”

This is because the distribution power of these tech giants and frontier labs in the market is unparalleled, she said.

Stockholm-based Lovable was valued at $6.6 billion in a December funding round led by CapitalG and Menlo Ventures. It competes with other vibe coding startups like Cursor, Replit, and Emergent, as well as far bigger and better-funded players, including OpenAI, Anthropic, and Microsoft, that make their own AI coding tools.

Verna, who joined the startup last May after a series of advisory and head of growth stints at various startups, said that in a world where products are becoming increasingly similar, distribution and growth are winning strategies.

“Whoever has the best distribution that is earned, that is competitively defensible, that is sustainable, that is predictable, is going to be the winner in the market,” she said. “I worry about the companies that have that figured out.”

Verna’s comments about competition follow a period of brutal comparisons between products made by vibe coding startups and Anthropic’s Claude Code.

After Anthropic released its latest model, Opus 4.6, founders and developers said on X that they were ditching their expensive Cursor and Lovable subscriptions for Claude Code.

Still, Lovable is going strong.

The Swedish startup’s annual recurring revenue surged by more than 30% in a single month, from $300 million to $400 million, Business Insider reported. ARR, a key metric for gauging startup performance, refers to the predictable revenue a company expects to generate over a year.
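For readers checking the math, the growth figure follows directly from the two reported ARR numbers. A quick illustrative calculation in Python (not Lovable’s internal accounting):

```python
# Month-over-month ARR growth, using the figures reported above.
start_arr = 300_000_000  # ARR at the start of the month, in USD
end_arr = 400_000_000    # ARR one month later, in USD

growth = (end_arr - start_arr) / start_arr
print(f"ARR growth: {growth:.1%}")  # -> ARR growth: 33.3%, i.e., "more than 30%"
```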

Lovable’s chief revenue officer, Ryan Meadows, told Business Insider that the company plans to more than double its head count by the end of 2026, from 146 to 350 employees.

He added that Lovable, which specializes in making coding user-friendly, sees at least 200,000 new vibe coding projects created each day.





AI researchers rally in support of Anthropic as the company says it risks losing $5 billion in Pentagon feud

Employees at rival companies — including OpenAI — are rallying behind Anthropic as the startup warns its escalating dispute with the Pentagon could cost $5 billion in lost business.

More than 30 researchers from OpenAI and Google, including Jeff Dean, the chief scientist of Google DeepMind, filed a joint amicus brief on Monday supporting Anthropic in its legal battle with the government. The employees signed in a personal capacity and do not represent their companies’ official views.

Their filing argues that the Pentagon’s decision to label Anthropic a “supply-chain risk” could harm the broader US AI industry.

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.

The dispute stems from a breakdown in negotiations between Anthropic and the Pentagon over guardrails on how its AI models could be used, particularly regarding mass domestic surveillance and autonomous lethal weapons.

Last month, Defense Secretary Pete Hegseth said that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” marking a dramatic expansion of the “supply chain risk” designation.

Anthropic has since sued the government in two courts, arguing the decision violates its First Amendment rights and unfairly retaliates against the company.

In court filings, Anthropic executives warned that the fallout is already hitting the company’s finances. Chief financial officer Krishna Rao wrote in a court statement that hundreds of millions of dollars in expected revenue tied to Pentagon-related work are at risk this year. If the government succeeds in discouraging companies from working with Anthropic more broadly, Rao added, the company could ultimately lose up to $5 billion in sales, which is roughly equivalent to its total revenue since commercializing its AI technology in 2023.

Anthropic’s chief commercial officer, Paul Smith, wrote in a separate court statement that the pressure from the government is causing business partners to take steps that “reflect deep distrust and a growing fear of associating with Anthropic.” Smith added that some customers have paused negotiations or demanded escape clauses, while others have canceled meetings entirely after the supply-chain designation.

The situation has also drawn criticism from industry leaders. OpenAI CEO Sam Altman, despite OpenAI signing its own contract with the Pentagon after Anthropic’s fell apart, wrote on social media that enforcing the supply chain risk designation “would be very bad for our industry and our country.”

Major cloud providers like Amazon and Microsoft have said they will continue offering Anthropic’s Claude AI models to customers without ties to the Pentagon.

Anthropic is now seeking a temporary court order that would allow it to continue working with military contractors while the legal fight continues. The first hearing could take place in San Francisco as soon as Friday.

The Pentagon did not immediately respond to a request for comment outside normal business hours.





Pentagon official details the ‘holy cow’ moments that sparked rift with Anthropic

The Pentagon’s R&D chief said the Department of Defense was “scared” about Anthropic shutting off access to its AI during a critical moment.

During an appearance on the “All-In Podcast” posted on Friday, Undersecretary of Defense for Research and Engineering Emil Michael detailed two pivotal moments that culminated in the Pentagon formally designating Anthropic as a supply chain risk, effectively blacklisting one of the nation’s largest AI companies.

One of those instances, Michael said, was when Anthropic CEO Dario Amodei suggested that the impasse over how the Pentagon could deploy the AI startup’s models could be bridged with a phone call, even if it came during “a decisive moment.”

“I was giving these scenarios, these Golden Dome scenarios, and so on,” Michael said on “All-In Podcast,” describing President Donald Trump’s signature missile defense initiative.

“And he’s like, ‘Just call me if you need another exception.’ And I’m like, ‘But what if the balloon’s going up at that moment and it’s like a decisive action we have to take? I’m not going to call you to do something. It’s not rational.’”

It’s not entirely clear what Anthropic would object to in the hypothetical Michael said he posed, though the implication is that some Golden Dome systems could have autonomous modes that fire weapons.

In the current US missile defense system, AI’s role is to provide rapid situational awareness and recommendations for human operators: it can quickly assess whether a detected launch poses a threat and recommend weapons to destroy it. The decision on whether to act on those recommendations rests with air defense commanders.
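A minimal sketch of that human-in-the-loop pattern, in which software can only recommend and a human commander must approve any action. All names here are invented for illustration; this does not describe any real defense system:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    assessment: str       # e.g., "hostile" or "non-threat"
    proposed_action: str  # e.g., "recommend intercept" or "no action"

def assess_launch(trajectory: str) -> Recommendation:
    """The AI's role: rapid situational awareness plus a recommendation."""
    if trajectory == "inbound":
        return Recommendation("hostile", "recommend intercept")
    return Recommendation("non-threat", "no action")

def act_on(rec: Recommendation, commander_approves: bool) -> str:
    """Only a human commander's approval turns a recommendation into action."""
    if rec.proposed_action == "no action":
        return "no action taken"
    return "intercept authorized" if commander_approves else "recommendation declined"

# The system can recommend on its own but cannot fire on its own:
rec = assess_launch("inbound")
print(act_on(rec, commander_approves=False))  # -> recommendation declined
```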

Elsewhere in the interview, Michael said that part of the impasse with Anthropic is that he “can’t predict for the next 20 years what all the things we might use AI for.”

Michael, who was previously a top executive at Uber, said the department’s concerns about Anthropic began to escalate after the US conducted a targeted raid on Venezuela to capture Venezuelan President Nicolás Maduro. The assault raised major questions about sovereignty, and congressional Democrats questioned the decision not to seek approval for the deployment of US forces.

In the wake of the raid, Michael said that an unnamed Anthropic executive called a Palantir executive to ask whether Anthropic’s AI models had been used to carry it out. The Pentagon accesses Anthropic’s AI models through a government cloud operated by Amazon Web Services and run by Palantir, Michael said. (On February 27, President Donald Trump ordered federal agencies to stop using Anthropic’s AI, though that directive came with a six-month phase-out period.)

Michael said Palantir officials were so alarmed by Anthropic’s questions that they alerted him.

“I’m like, ‘Holy shit, what if this software went down, some guardrail kicked up, some refusal happened for the next fight like this one, and we left our people at risk,’” Michael said, alluding to the US’s current war against Iran.

As talks grew heated, Michael said he felt like Anthropic turned the discussion “into a PR game” by publicly raising concerns about how the terms the Pentagon sought would not adequately account for potential misuse. Amodei has confirmed that Anthropic was particularly worried about the risks posed by fully autonomous weapons and how powerful AI models could be abused to spy on American citizens.

During the heated back-and-forth, Michael publicly called Amodei a “liar” with “a God-complex.”

On Thursday, the Pentagon said it formally notified Anthropic that it was declaring the company and its products to be a supply chain risk, the first time in history that label had been applied to a US company.

Amodei responded that his AI startup had “productive conversations” with the Pentagon in recent days, but Michael later said that no discussions were ongoing.

Anthropic has suggested it will challenge the designation in court, especially since Defense Secretary Pete Hegseth has said it prevents any defense contractor from doing business with Anthropic.

Asked why the Pentagon went so far, Michael said the designation was not “punitive.”

“If their model has this policy bias, let’s call it, based on their constitution, their culture, their people, and so on,” he said. “I don’t want Lockheed Martin using their model to design weapons for me.”

Earlier this week, a Lockheed spokesperson said it would follow Trump and the Pentagon’s direction on whether it would continue to use Anthropic’s products. Michael also called out Boeing, describing how the airplane manufacturer could use Anthropic’s AI for non-defense tasks.

“So, Boeing wants to use Anthropic to build commercial jets — have at it,” he said. “Boeing wants to use it to build fighter jets. I can’t have that because I don’t trust what the outputs may be, because they’re so wedded to their own policy preferences.”

While Michael was critical of Anthropic, he praised xAI and Elon Musk for agreeing to the department’s terms, allowing it to deploy AI “for all lawful uses.”

Michael also praised OpenAI and its CEO, Sam Altman, for working with the Pentagon to quickly stand up another AI system capable of operating in classified settings, so the department can phase out Anthropic.

Altman and OpenAI have received significant blowback online for agreeing to work with the Pentagon. Altman publicly urged the department not to label Anthropic a supply chain risk.

“To his credit, I called him and said, ‘I need a solution if this thing goes sideways. I need multiple solutions. I’d like you to be one of them,’” Michael said. “And he’s like, ‘Okay, well, what can I do for the country?’ I was like, ‘I need to get you up and running as soon as I can.’”





Dario Amodei says Anthropic is having ‘productive conversations’ with the Pentagon despite blacklist

Anthropic CEO Dario Amodei isn’t walking away from the table with the Pentagon, even as his company prepares to sue the Defense Department.

“I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,” Amodei wrote in a lengthy statement published on Thursday night.

Amodei’s statement came after the Pentagon confirmed that it formally notified Anthropic that “the company and its products are deemed a supply chain risk, effective immediately.” It means that Anthropic is effectively blacklisted after Amodei and the company refused to acquiesce to the department’s demands.

For all of the talk of reconciliation, Amodei said Anthropic is prepared to sue the Pentagon over the designation. In particular, Defense Secretary Pete Hegseth has said that the effective blacklisting means that any company that has defense contracts cannot do business with Anthropic.

“The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation,” Amodei wrote of the letter the Pentagon sent Anthropic.

Microsoft, which offers Anthropic models to its customers, said it will continue to work with the AI startup.

“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry and that we can continue to work with Anthropic on non-defense related projects,” a Microsoft spokesperson said in a statement to Business Insider.

The fight with the Pentagon has sparked interest in Anthropic, making Claude the top free app in major US app stores. Still, the standoff carries risks for the AI startup given its focus on enterprise business.

Amodei said that Anthropic continues to disagree with the Pentagon’s position on the sweeping nature of the ban.

“With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he wrote.

Amodei also offered a public apology after The Information reported that he wrote harshly critical comments about the White House in a private memo to staff after talks with the Pentagon fell apart on Friday. In the memo, Amodei wrote that the administration didn’t like his company because he hadn’t “given dictator-style praise to Trump.”

“It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views,” Amodei wrote on Thursday. “It was also written six days ago, and is an out-of-date assessment of the current situation.”

Claude has shot up in popularity since Friday, but Amodei’s statement makes clear that the AI company wants to de-escalate the situation.

“Anthropic has much more in common with the Department of War than we have differences,” he wrote.





Anthropic says it can’t ‘in good conscience’ agree to the military’s terms over the use of its AI

Anthropic’s CEO is prepared to walk away from the company’s contract with the military, according to a new statement published on Thursday.

In a blog post, CEO Dario Amodei said that the company “cannot in good conscience accede” to the request of the Defense Department concerning safeguards around its frontier model, Claude.

On Tuesday, Defense Secretary Pete Hegseth gave Anthropic an ultimatum: agree to the military’s terms over the use of Claude or be blacklisted by the government.

Defense officials gave Anthropic until Friday evening to agree to the terms.

The exact terms were not disclosed, but the issue, according to Amodei’s statement, appears to revolve around two red lines Anthropic is not willing to cross when it comes to how Claude is deployed: mass domestic surveillance and fully autonomous weapons.

A spokesperson for Anthropic declined to comment.

Hours before Amodei put out a statement, Sean Parnell, a Pentagon spokesperson, posted on X that the department had no interest in using AI to conduct mass surveillance of US citizens or to develop autonomous weapons.

And several hours after Amodei released his statement, the Department’s undersecretary for research and engineering, Emil Michael, lashed out at the CEO in a social media post.

“It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote on X.

“The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company,” added Michael, who was previously Uber’s chief business officer.

A person familiar with the negotiations told Business Insider the department provided a new proposal just 36 hours before Hegseth’s deadline, and the language around the provisions on mass surveillance and autonomous weapons allowed for “any lawful use” of Anthropic’s AI.

The person said the additions essentially gave the military the discretion to set aside Anthropic’s red lines and use Claude as it sees fit.

A senior Pentagon official told Business Insider on Tuesday that the department will consider invoking the Defense Production Act — a wartime law that would essentially give the president control over Anthropic’s resources in the interest of national security — and deem the company a supply chain risk.

Both uses of those national security authorities would be unprecedented, experts told Business Insider, given that the levers are being wielded as a negotiating tactic against an American company.

“I’m not aware of this ever having been used as a weapon in a negotiating posture,” Dean Ball, an ex-senior policy advisor for the White House Office of Science and Technology Policy, told Business Insider.

Amodei wrote in his blog post that the two threats are “inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

The CEO wrote that Anthropic hopes the government will “reconsider” its position on the safeguards and that the company’s preference is to continue working with the military.

“Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions,” Amodei wrote.

It’s not yet clear how the Pentagon plans to respond.

February 27, 2026: This story was updated to reflect a public statement by Emil Michael, the Pentagon’s undersecretary for research and engineering, about Amodei.





Top Anthropic executive limits his child’s YouTube algorithm access: ‘It freaks me out’

Jack Clark, Anthropic’s head of policy, says he spends his days thinking about AI guardrails.

At home, he says he’s building guardrails, too — for his own kids.

During an interview with “The Ezra Klein Show,” Clark said he limits how much technology his toddler uses and is uneasy about algorithmic exposure for his children.

“I have the classic Californian technology executive view of not having that much technology around for children,” Clark, who recently returned from parental leave and has a newborn at home, said. “I think finding a way to budget your child’s time with technology has always been the work of parents and will continue to be.”

He said technology is becoming more “ubiquitous,” making it “hard to escape” for parents.

At home, Clark said his toddler can watch “Bluey” and a few other shows on their smart TV, but he hasn’t allowed “unfettered access to the YouTube algorithm.”

“It freaks me out,” he added.

Clark’s approach echoes other tech leaders who limit their kids’ screen time.

In 2025, Miranda Kerr said she and her husband, Snap CEO Evan Spiegel, didn’t allow her then-14-year-old son to have phones or computers in his bedroom after 9:30 pm. In 2024, PayPal cofounder Peter Thiel said he limits his children’s screen time to 90 minutes a week. Apple’s cofounder Steve Jobs famously told the New York Times in 2010 that his kids hadn’t used an iPad.

“We limit how much technology our kids use at home,” Jobs said.

Clark says his parenting model is partially based on how he grew up. His father, who had a computer at his office, would let Clark use the machine — but would step in when screen time got excessive.

“My dad would let me play on the computer, and at some point he’d say: ‘Jack, you’ve had enough computers today. You’re getting weird,’” he said.

AI systems will need stronger parental controls, Clark said. Those guardrails, he said, will take on increased importance in the AI race — especially as children try to access systems intended for adults.

“So we’re going to need to build pretty heavy parental controls into this system,” he said. “We serve ages 18 and up today, but obviously, kids are smart, and they’re going to try to get onto this stuff.”
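As a rough sketch of what the baseline age gate behind that policy could look like (hypothetical; Anthropic has not published such an implementation, and the function and field names here are invented):

```python
MINIMUM_AGE = 18  # the "ages 18 and up" policy Clark describes

def may_access(age: int, age_verified: bool) -> bool:
    """Deny access unless the age is verified and meets the minimum."""
    return age_verified and age >= MINIMUM_AGE

# Self-reported but unverified ages are rejected -- which is why Clark expects
# "pretty heavy" controls: kids will try to route around simple checks.
assert may_access(30, age_verified=True)
assert not may_access(30, age_verified=False)
assert not may_access(12, age_verified=True)
```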

Clark and Anthropic didn’t immediately respond to a request for comment from Business Insider. When reached for comment, a YouTube spokesperson pointed Business Insider to a guide on the platform’s website about age-appropriate experiences and parental controls.





Anthropic is dropping its signature safety pledge amid a heated AI race

Anthropic is no longer daring to be quite so different.

The AI startup founded by former OpenAI employees, laser-focused on the proper development of the technology, is weakening its foundational safety principle.

In a statement on Tuesday, Anthropic said that amid heightened competition and a lack of government regulation, it will no longer abide by its commitment “to pause the scaling and/or delay the deployment of new models” when such advancements would have outpaced its own safety measures.

The new policy means Anthropic is far less constrained by safety concerns at a moment when its flagship chatbot, Claude, is upending financial markets and sparking concerns about the death of software.

As part of the changes, Anthropic now has separate safety recommendations, called its Responsible Scaling Policy, for itself and for the AI industry as a whole. The policy was loosely modeled after the US government’s biosafety level (BSL) standards.

Anthropic’s chief science officer, Jared Kaplan, told Time Magazine that the responsible scaling policy was not in keeping with the current state of the AI race.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan told Time. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

The new policy still includes a commitment to delay the development or release of “a highly capable” AI model, but only in more limited circumstances.

In a lengthy blog post, Anthropic cited “an anti-regulatory political climate” as part of the reason for its decision. The company and its CEO, Dario Amodei, have pushed for AI regulations with some success on the state level, but without any major steps at the federal level.

“We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust,” the company wrote. “But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.”

The company said the scaling policy was always intended to be “a living document,” a point made in the policy’s first version in 2023. That said, Amodei has previously said the safety policy was meant to mitigate the risks AI could unleash, even quoting Uncle Ben’s famous admonition to Peter Parker, aka Spider-Man.

“The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance, and peace, large parts of the economy, those come with risks as well, right?” Amodei told podcaster Lex Fridman in November 2024. “With great power comes great responsibility.”

Anthropic said another reason for changing the standards is that the higher theoretical risk levels in its framework, ASL-4 and beyond, cannot be contained by any one company alone. (In the biosecurity world, BSL-4 refers to the highest level of protection, implemented by an extremely small number of labs to handle pathogens like the Ebola virus.)
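One way to picture the tiered structure is a mapping from each AI Safety Level (ASL) to the safeguards it demands, by analogy with BSL tiers. A minimal sketch, with safeguard descriptions paraphrased for illustration rather than quoted from Anthropic’s policy:

```python
# Illustrative tiering by analogy with biosafety levels (BSL-1 through BSL-4).
# These descriptions are paraphrases for illustration, not Anthropic's policy text.
ASL_SAFEGUARDS = {
    "ASL-2": "baseline security and deployment safeguards for current models",
    "ASL-3": "hardened protections against theft and misuse for more capable models",
    "ASL-4": "safeguards so demanding that, per the company, no single firm "
             "could implement them alone",
}

def required_safeguards(level: str) -> str:
    """Look up the safeguards a given safety level would require."""
    return ASL_SAFEGUARDS.get(level, "unknown level")

print(required_safeguards("ASL-4"))
```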

Safety is the core of Anthropic’s soul

Amodei has repeatedly said his company’s commitment to safety is evident in one of its first major decisions: holding back on releasing Claude in the summer of 2022.

Looking back on the move, Amodei has said that Anthropic was worried that it could not develop safeguards quickly enough for the public release of a breakthrough technology. OpenAI released ChatGPT in November 2022, kick-starting the AI race. Months later, Anthropic finally released Claude.

“Now, that was very commercially expensive,” Amodei said during a recent interview with billionaire and investor Nikhil Kamath. “We probably ceded the lead on consumer AI because of that.”

One of Claude’s previous training documents is internally referred to as the “Soul doc,” an example of rhetoric that would be out of place at most other AI companies.

Kamath pressed Amodei on how he responds to critics who say Anthropic is just pushing regulation to stop the growth of future competitors. Amodei said the 2022 decision was an example of how his company backs up its talk on safety. He also pointed to its advocacy of US export controls on advanced chips to China, a position that Nvidia CEO Jensen Huang has criticized.

“Anyone who thinks we benefit from being the only ones to do that, it’s really hard to come up with a picture where that’s the case,” Amodei said. “You look at any one of these and, ‘okay, fine,’ but you put enough of them together, and I don’t know, I ask you to judge us by our actions.”



