
Judge temporarily blocks the Pentagon from declaring Anthropic a national security risk

A federal judge has granted Anthropic a major reprieve as the AI company challenges the Pentagon’s effective blacklisting.

On Thursday, US District Judge Rita Lin granted Anthropic’s request for a preliminary injunction to temporarily block the “Presidential Directive” that ordered federal agencies to stop using Anthropic’s technology, and Defense Secretary Pete Hegseth’s decision to formally label the AI frontier model maker as a “supply chain risk.”

Lin also stayed the effective date of the supply-chain designation, meaning it cannot take effect while the injunction is in place.

The decision is a victory for Anthropic and its CEO Dario Amodei, who refused to bow to Hegseth’s demands. It is not immediately clear if the Justice Department will appeal the decision. In the hours after talks with Anthropic fell apart, the Pentagon struck a deal with OpenAI.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said in a statement. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Spokespeople for the Pentagon and White House did not immediately respond to a request for comment.

In court filings, Anthropic officials said the risk designation could jeopardize potentially billions of dollars in revenue. If the injunction remains in place, Anthropic will be able to continue doing business with defense contractors.

Lin wrote in her decision that the injunction does not require the Defense Department to use Anthropic’s products or services.

Many in tech are closely watching the California case, since it tests whether the federal government can use some of its most severe powers to force a major AI company to agree to contractual terms. Microsoft, which filed an amicus brief in support of Anthropic, also said it was concerned about potential repercussions if companies like it continued to partner with Anthropic.

Ahead of her ruling, Lin grilled the Justice Department over what she said looked like “an attempt to cripple Anthropic.” She said that the Pentagon could have simply discontinued using Claude, but instead, the Trump administration took repeated actions that appeared designed to “punish” the company.

“One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Lin said during the hearing. “And specifically, my concern is whether Anthropic is being punished for criticizing the government’s contracting position in the press.”

Beyond the California case, Anthropic has a separate suit pending in the D.C. Circuit over the supply chain risk designation.

It also remains to be seen how the White House and the broader Trump administration will treat Anthropic beyond the actions Lin’s ruling compels.

During the hearing, Deputy Assistant Attorney General Eric Hamilton repeatedly said that the Pentagon questions Anthropic’s “reliability and trustworthiness.” Hamilton said that defense officials are concerned Anthropic may try to improperly skew its AI models or shut off access.

In recent weeks, Hegseth, who met with Amodei, said the AI startup put “Silicon Valley ideology above American lives.” President Donald Trump decried the “WOKE COMPANY” run by “Leftwing nut jobs” in a Truth Social post that was also cited in the California lawsuit.

“Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump wrote on Truth Social on February 27.





This Pentagon announcement on an Operation Epic Fury soldier’s ‘believed to be’ death is very unusual

The Pentagon last week announced the death of a US Army soldier killed while supporting Operation Epic Fury before a medical examiner had positively identified them. Former military spokespeople said that it was an unusual and awkwardly phrased departure from standard procedures.

In a press release titled, “DOW Identifies An Army Believed to Be Casualty,” the Department of Defense announced “the believed to be death” of Chief Warrant Officer 3 Robert M. Marzan, an Army reservist who died during an Iranian strike that also killed five other troops in Kuwait.

According to the release, Marzan, a soldier with the 103rd Sustainment Command, “was at the scene of the incident on March 1, 2026, and is believed to be the individual who perished at the scene. Positive identification of Chief Warrant Officer 3 Marzan will be completed by the medical examiner.”

A defense official told Business Insider a medical examiner has since confirmed Marzan’s identity.


Screenshot of the DoD announcement. Source: war.gov



Prematurely announcing a death risks misidentification, which can erode public trust if corrections are later required, two former military spokespeople told Business Insider. They also said attention to detail and clarity in these communications shows respect.

“When a service member is killed in combat, they deserve better than this,” Joe Plenzler, a retired Marine lieutenant colonel who worked in public affairs during the Global War on Terror, wrote in a post to LinkedIn.

He told Business Insider separately that “it’s a simple matter of respect to make sure that everything is accurate.”

US Central Command, which oversees American military operations in the Middle East, has reported seven US service members killed in action in the Iran war that began in late February. The fatal strike in Kuwait came as Iranian forces launched missiles into countries across the region. Marzan is the only service member in this conflict so far who has been described as “believed to be” dead.

Asked about the statement, the Office of the Secretary of Defense referred Business Insider to the Army, saying, “DOW announces, all follow-on questions go to the Army.” The Army did not provide comment to Business Insider.

Why the Pentagon statement was unusual

Typically, the military refers to a service member whose death has not yet been confirmed as “DUSTWUN,” short for “duty status — whereabouts unknown,” a retired Army spokesman who served during the wars in Iraq and Afghanistan told Business Insider. The term is used when a service member’s absence is involuntary and their status cannot yet be confirmed.

Announcing a death before positive identification by a medical examiner marks a break from norms that governed casualty reporting over two previous decades of war, the former spokesman said. The DUSTWUN designation is intended for situations where ongoing rescue efforts prevent an immediate determination, though recovery of remains is not always required to declare a service member deceased, according to military policies outlining casualty procedures.

“We had thousands of casualties throughout the conflicts in Iraq and Afghanistan and Syria,” said the retired Army official. “I don’t recall ever announcing someone as ‘believed to be a casualty.'”

Few communications are as important or sensitive as announcing a casualty, he said, describing a somber process honed after more than 7,000 US service member deaths during the Global War on Terror, according to Brown University’s Costs of War project.

Plenzler, the former Marine public affairs officer, told Business Insider that all communications related to sensitive topics, including casualties, were generally examined by at least three people before publication because of the heavy impact on public trust.

In his LinkedIn post, he recalled seeing “people removed from leadership positions for getting names incorrect during memorial services.”

While the former Army spokesman expressed disappointment in what he characterized as an awkwardly written DoD announcement, he also noted that many of the personnel who oversaw casualty communications during the height of the previous wars in the Middle East have since left the service, leaving newer troops to manage hard notifications and public messaging.

“We have been sort of out of this business now for several years,” he said.

Marzan, 54, lived in Sacramento and was assigned to an Iowa-based logistics unit. Business Insider could not reach a Marzan family member for comment.

Communicating casualty updates comes with a learning curve, the former Army spokesman said. The details of this release are unclear, but he said he hopes “they’ve learned a lesson from this.”

The announcement comes amid broader shifts in how the military communicates during fast-moving combat operations, including increased reliance on social media updates from combatant commands and the Pentagon. Communications once known for staid military-speak now often feature videos of US missile strikes or jets taking off, strong wartime rhetoric, or posts debunking Iranian “bogus claims.”





Katherine Li, West Coast breaking news reporter at Business Insider.

AI researchers rally in support for Anthropic as company says it risks losing $5 billion in Pentagon feud

Employees at rival companies — including OpenAI — are rallying behind Anthropic as the startup warns its escalating dispute with the Pentagon could cost $5 billion in lost business.

More than 30 researchers from OpenAI and Google, including Jeff Dean, the chief scientist of Google DeepMind, filed a joint amicus brief on Monday supporting Anthropic in its legal battle with the government. The employees signed in a personal capacity and do not represent their companies’ official views.

Their filing argues that the Pentagon’s decision to label Anthropic a “supply-chain risk” could harm the broader US AI industry.

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.

The dispute stems from a breakdown in negotiations between Anthropic and the Pentagon over guardrails around how its AI models could be used, particularly around mass domestic surveillance and autonomous lethal weapons.

Last month, Defense Secretary Pete Hegseth said that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” marking a dramatic expansion of the “supply chain risk” designation.

Anthropic has since sued the government in two courts, arguing the decision violates its First Amendment rights and unfairly retaliates against the company.

In court filings, Anthropic executives warned that the fallout is already hitting the company’s finances. Chief financial officer Krishna Rao wrote in a court statement that hundreds of millions of dollars in expected revenue tied to Pentagon-related work are at risk this year. If the government succeeds in discouraging companies from working with Anthropic more broadly, Rao added, the company could ultimately lose up to $5 billion in sales, which is roughly equivalent to its total revenue since commercializing its AI technology in 2023.

Anthropic’s chief commercial officer, Paul Smith, wrote in a separate court statement that the pressure from the government is causing business partners to take steps that “reflect deep distrust and a growing fear of associating with Anthropic.” Smith added that some customers have paused negotiations or demanded escape clauses, while others have canceled meetings entirely after the supply-chain designation.

The situation has also drawn criticism from industry leaders. OpenAI CEO Sam Altman, despite his own company signing a contract with the Pentagon after Anthropic’s fell apart, wrote on social media that enforcing the supply chain risk designation “would be very bad for our industry and our country.”

Major cloud providers like Amazon and Microsoft have said they will continue offering Anthropic’s Claude AI models to customers without ties to the Pentagon.

Anthropic is now seeking a temporary court order that would allow it to continue working with military contractors while the legal fight continues. The first hearing could take place in San Francisco as soon as Friday.

The Pentagon did not immediately respond to a request for comment outside normal business hours.





The fallout over OpenAI’s Pentagon deal is growing

Many other OpenAI staffers have also publicly criticized the company’s Pentagon deal.

“i personally don’t think this deal was worth it,” Aidan McLaughlin, a research scientist at OpenAI, wrote on X.

Another employee told CNN that many of them “really respect” Anthropic for refusing the Pentagon’s deal.

Clive Chan, a technical staffer, wrote in an X post that he believed OpenAI’s contract barred the use of its models for mass weapons or mass domestic surveillance. Chan wrote that he’s advocating for the company to share more information.

“If we later learn this is not the case, then I will advocate internally to terminate the contract,” Chan wrote.

Even before the deal, nearly 900 former and current OpenAI and Google staffers signed a joint petition supporting Anthropic, one of their primary competitors, and opposing the use of their companies’ technology for weapons that can kill without human oversight and mass surveillance.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the petition said.





OpenAI’s robotics head quits after company’s Pentagon deal: ‘This was about principle’

Caitlin Kalinowski, a hardware executive who joined OpenAI from Meta in 2024 and led its robotics division, said she is resigning from the company.

In a post on X on Saturday, Kalinowski criticized OpenAI’s recent deal with the Pentagon.

“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote.

She called her resignation a matter of principle, and said she still deeply respects OpenAI CEO Sam Altman and the team and is proud of their robotics work.

A spokesperson for OpenAI confirmed Kalinowski’s resignation and defended its deal with the Defense Department.

“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the spokesperson told Business Insider. “We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society, and communities around the world.”

OpenAI struck a deal with the Pentagon last week, allowing the Defense Department to use its AI products. The agreement came after its rival Anthropic refused a similar deal over concerns that the technology would be used for mass surveillance and autonomous weapons.

Anthropic has since been effectively blacklisted in Washington. President Donald Trump described the company as “radical woke” in a Truth Social post and demanded federal agencies stop using Anthropic’s technology. Secretary of Defense Pete Hegseth then designated Anthropic a supply-chain risk to national security and said Defense Department contractors would be barred from working with the company.

OpenAI’s decision to strike a deal with the Pentagon caused an immediate backlash. Some users ditched ChatGPT in protest. Anthropic’s chatbot, Claude, is now the No. 1 free app on the Apple App Store, unseating OpenAI’s ChatGPT. Claude’s US downloads increased 240% month over month in February.

Kalinowski’s exit is a setback for OpenAI’s robotics ambitions.

Over the last year, the company has quietly built a San Francisco lab that employs about 100 data collectors. Teams are training a robotic arm to do household chores as part of a broader push to build a humanoid robot. The company told employees in December it also plans to open a second lab in Richmond, California.

A source with knowledge of OpenAI’s plans also previously told Business Insider that the company is exploring several early-stage hardware initiatives — including robotics — but none are considered central to its core mission at this point.





Pentagon official details the ‘holy cow’ moments that sparked rift with Anthropic

The Pentagon’s R&D chief said the Department of Defense was “scared” about Anthropic shutting off access to its AI during a critical moment.

During an appearance on the “All-In Podcast” posted on Friday, Undersecretary of Defense for Research and Engineering Emil Michael detailed two pivotal moments that culminated in the Pentagon formally designating Anthropic as a supply chain risk, effectively blacklisting one of the nation’s largest AI companies.

One of those instances, Michael said, was when Anthropic CEO Dario Amodei suggested that the impasse over how the Pentagon could deploy the AI startup’s models could be bridged with a phone call, even if it came during “a decisive moment.”

“I was giving these scenarios, these Golden Dome scenarios, and so on,” Michael said on “All-In Podcast,” describing President Donald Trump’s signature missile defense initiative.

“And he’s like, ‘Just call me if you need another exception.’ And I’m like, ‘But what if the balloon’s going up at that moment and it’s like a decisive action we have to take? I’m not going to call you to do something. It’s not rational.’”

It’s not entirely clear what Anthropic would object to in the hypothetical Michael said he posed, though the implication is that some Golden Dome systems could have autonomous modes that fire weapons.

In the current US missile defense system, AI’s role is to provide rapid situational awareness and recommendations for human operators. AI could rapidly assess whether a detected launch poses a threat and recommend weapons to destroy it. Decisions on whether to listen to the recommendations are then made by air defense commanders.

Elsewhere in the interview, Michael said that part of the impasse with Anthropic is that he “can’t predict for the next 20 years what all the things we might use AI for.”

Michael, who was previously a top executive at Uber, said the department’s concerns about Anthropic began to escalate after the US conducted a targeted raid on Venezuela to capture Venezuelan President Nicolás Maduro. The assault raised major questions about sovereignty, and congressional Democrats questioned the decision not to seek approval for the deployment of US forces.

In the wake of the raid, Michael said that an unnamed Anthropic executive called a Palantir executive to ask whether Anthropic’s AI models had been used to carry it out. The Pentagon accesses Anthropic’s AI models through a government cloud operated by Amazon Web Services and run by Palantir, Michael said. (On February 27, President Donald Trump ordered federal agencies to stop using Anthropic’s AI, though that directive came with a six-month phase-out period.)

Michael said Palantir officials were so alarmed by Anthropic’s questions that they alerted him.

“I’m like, ‘Holy shit, what if this software went down, some guardrail kicked up, some refusal happened for the next fight like this one, and we left our people at risk,’” Michael said, alluding to the US’s current war against Iran.

As talks grew heated, Michael said he felt like Anthropic turned the discussion “into a PR game” by publicly raising concerns about how the terms the Pentagon sought would not adequately account for potential misuse. Amodei has confirmed that Anthropic was particularly worried about the risks posed by fully autonomous weapons and how powerful AI models could be abused to spy on American citizens.

During the heated back-and-forth, Michael publicly called Amodei a “liar” with “a God-complex.”

On Thursday, the Pentagon said it formally notified Anthropic that it was declaring the company and its products to be a supply chain risk, the first time in history that label had been applied to a US company.

Amodei responded that his AI startup had “productive conversations” with the Pentagon in recent days, but Michael later said that no discussions were ongoing.

Anthropic has suggested it will challenge the designation in court, especially since Defense Secretary Pete Hegseth has said it prevents any defense contractor from doing business with Anthropic.

Asked why the Pentagon went so far, Michael said the designation was not “punitive.”

“If their model has this policy bias, let’s call it, based on their constitution, their culture, their people, and so on,” he said. “I don’t want Lockheed Martin using their model to design weapons for me.”

Earlier this week, a Lockheed spokesperson said the company would follow Trump and the Pentagon’s direction on whether to continue using Anthropic’s products. Michael also called out Boeing, describing how the airplane manufacturer could use Anthropic’s AI for non-defense tasks.

“So, Boeing wants to use Anthropic to build commercial jets — have at it,” he said. “Boeing wants to use it to build fighter jets. I can’t have that because I don’t trust what the outputs may be, because they’re so wedded to their own policy preferences.”

While Michael was critical of Anthropic, he praised xAI and Elon Musk for agreeing to the department’s terms, allowing it to deploy AI “for all lawful uses.”

Michael also praised OpenAI and its CEO, Sam Altman, for working with the Pentagon to quickly stand up another AI system capable of operating in classified settings, so the department can phase out Anthropic.

Altman and OpenAI have received significant blowback online for agreeing to work with the Pentagon. Altman publicly urged the department not to label Anthropic a supply chain risk.

“To his credit, I called him and said, ‘I need a solution if this thing goes sideways. I need multiple solutions. I’d like you to be one of them,’” Michael said. “And he’s like, ‘Okay, well, what can I do for the country?’ I was like, ‘I need to get you up and running as soon as I can.’”





Microsoft says Anthropic’s products can stay on its platforms after lawyers ‘studied’ the Pentagon supply chain risk designation

Microsoft said Anthropic’s AI tools aren’t going anywhere on its platforms despite the Pentagon blacklisting the startup.

The Pentagon on Thursday formally told Anthropic that “the company and its products are deemed a supply chain risk, effective immediately.” Defense Secretary Pete Hegseth has said the designation effectively bars companies with defense contracts from doing business with Anthropic.

Anthropic has said it plans to challenge the decision in court.

The designation follows a dispute between the AI startup and the Pentagon over how its Claude models could be used. Anthropic has said it will not allow its technology to be deployed for mass domestic surveillance or fully autonomous weapons.

A Microsoft spokesperson told Business Insider on Thursday that the company’s “lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers.”

Claude will still be available to customers through platforms such as M365, GitHub, and Microsoft’s AI Foundry, except for the Department of War, the spokesperson said in a statement.

“We can continue to work with Anthropic on non-defense-related projects,” the spokesperson added.

Microsoft has deepened its ties with Anthropic in recent months. In November, the companies said that Anthropic would spend $30 billion on Microsoft’s Azure cloud services, while Microsoft agreed to invest up to $5 billion in the startup.

Microsoft also said in September that it was integrating Anthropic’s models into Microsoft 365 Copilot alongside systems from OpenAI.

The Anthropic-Pentagon saga

In a statement published on Thursday evening, Anthropic CEO Dario Amodei said the company is in talks with the Defense Department even as it is preparing for court.

“I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,” Amodei wrote.

However, Emil Michael, a Department of War official, said in a post on X following Amodei’s statement that negotiations are off the table.

“I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI,” Michael wrote.

Amodei also offered an apology in his statement after The Information reported that he had privately blasted the White House in a memo to staff after talks with the Pentagon fell apart.

In the memo, Amodei wrote that the administration disliked his company because he had not offered “dictator-style praise to Trump.”

“Anthropic has much more in common with the Department of War than we have differences,” Amodei said on Thursday.



