Anthropic is dropping its signature safety pledge amid a heated AI race

Anthropic is no longer daring to be quite so different.

The AI startup founded by former OpenAI employees, laser-focused on the proper development of the technology, is weakening its foundational safety principle.

In a statement on Tuesday, Anthropic said that amid heightened competition and a lack of government regulation, it will no longer abide by its commitment “to pause the scaling and/or delay the deployment of new models” when such advancements would have outpaced its own safety measures.

The new policy means Anthropic is far less constrained by safety concerns at a moment when its flagship chatbot, Claude, is upending financial markets and sparking concerns about the death of software.

As part of the changes, Anthropic now has separate safety recommendations, called its Responsible Scaling Policy, for itself and the AI industry as a whole. The policy was loosely modeled after the US government's biosafety level (BSL) standards.

Anthropic’s chief science officer, Jared Kaplan, told Time Magazine that the responsible scaling policy was not in keeping with the current state of the AI race.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan told Time. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

The new policy still includes a commitment to delay the development or release of “a highly capable” AI model, but only in more limited circumstances.

In a lengthy blog post, Anthropic cited “an anti-regulatory political climate” as part of the reason for its decision. The company and its CEO, Dario Amodei, have pushed for AI regulations with some success on the state level, but without any major steps at the federal level.

“We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust,” the company wrote. “But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.”

The company said the scaling policy was always intended to be “a living document,” which was outlined in the first version in 2023. That said, Amodei has previously said the safety policy was meant to mitigate the risks AI could unleash — even quoting Uncle Ben’s famous admonition to Peter Parker, aka Spider-Man.

“The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance, and peace, large parts of the economy, those come with risks as well, right?” Amodei told podcaster Lex Fridman in November 2024. “With great power comes great responsibility.”

Anthropic said another reason for changing the standards is that the higher theoretical levels of risk in its framework, ASL-4 and beyond, cannot be contained by any one company alone. (In the biosecurity world, BSL-4 refers to the highest level of protection, which an extremely small number of labs implement to handle pathogens like the Ebola virus.)

Safety is the core of Anthropic’s soul

Amodei has repeatedly said his company’s commitment to safety is evident in one of its first major decisions: holding back on releasing Claude in the summer of 2022.

Looking back on the move, Amodei has said that Anthropic was worried that it could not develop safeguards quickly enough for the public release of a breakthrough technology. OpenAI released ChatGPT in November 2022, kick-starting the AI race. Months later, Anthropic finally released Claude.

“Now, that was very commercially expensive,” Amodei said during a recent interview with billionaire and investor Nikhil Kamath. “We probably ceded the lead on consumer AI because of that.”

One of Claude’s previous training documents is internally referred to as the “Soul doc,” an example of rhetoric that would be out of place at most other AI companies.

Kamath pressed Amodei on how he responds to critics who say Anthropic is just pushing regulation to stop the growth of future competitors. Amodei said the 2022 decision was an example of how his company backs up its talk on safety. He also pointed to its advocacy for US export controls on advanced chips to China, a position that Nvidia CEO Jensen Huang has criticized.

“Anyone who thinks we benefit from being the only ones to do that, it’s really hard to come up with a picture where that’s the case,” Amodei said. “You look at any one of these and, ‘okay, fine,’ but you put enough of them together, and I don’t know, I ask you to judge us by our actions.”
American Eagle’s bet on Sydney Sweeney and Aerie’s anti-AI pledge are paying off big time

American Eagle’s marketing campaigns are giving the company a meaningful boost.

The retailer has launched a number of campaigns this year that have been at the center of viral moments online.

It looks like they’re paying off financially. The company’s stock is up this year, and its total revenue was $1.4 billion for the third quarter that ended November 1, roughly 6% higher year-over-year.

American Eagle raised forward-looking guidance for the fourth quarter, and its stock rose at least 10% after hours on Tuesday.

The boost was driven by its intimates and loungewear brand, Aerie, which saw comparable sales rise by 11%. While other retailers are spending big on AI products for consumers, Aerie is making a promise not to use the technology.

Aerie’s pledge not to use AI in its ads, shared in an Instagram post, garnered tens of thousands of likes, making it the brand’s most popular post of the past year as of October, according to Metricool, which tracks social media engagement, in comments to Business Insider.

American Eagle’s success is also due in part to the star power of Sydney Sweeney and Travis Kelce, who were featured in campaigns that gained traction on social media.

Sweeney’s “Great Jeans” partnership in July drew criticism online from some who said the campaign had a negative message that promoted “regressive” beauty standards. American Eagle tripled down on the campaign.

“Sydney Sweeney sells great jeans. She is a winner, and in just six weeks, the campaign has generated unprecedented new customer acquisition,” chief marketing officer Craig Brommers said in September.

In August, American Eagle released a clothing line in collaboration with NFL star Travis Kelce and his Tru Kolors brand, one day after he announced his engagement to Taylor Swift.

The two campaigns generated a combined 44 billion impressions and attracted more new customers “than ever before.”

“American Eagle launched its largest, most impactful advertising campaigns ever, which are delivering results by collaborating with high-profile partners who are defining culture,” president and executive creative director Jen Foyle said on the Tuesday call.

The brand is not done forming an all-star cast of celebrity partners. Its most recent campaign is with Martha Stewart, and American Eagle is betting it’ll be a hit with Gen Z customers.

“Martha Stewart resonates with Gen Z. That’s a perfect example of what we’re up to,” Foyle said.