Chong Ming Lee, Junior News Reporter at Business Insider's Singapore bureau.

Anthropic says it will pay 100% of the grid upgrade costs tied to its AI data centers

Anthropic says it’s going to foot the bill for electricity price increases tied to its data centers.

“We will pay for 100% of the grid upgrades needed to interconnect our data centers,” Anthropic said in a blog post published Wednesday, adding that it will absorb costs that might otherwise be passed on to American households.

Anthropic said it will secure additional power to avoid pushing up electricity prices and invest in “grid optimization tools” designed to reduce strain and keep prices low.

“The country needs to build new data centers quickly to maintain its competitiveness on AI and national security,” Anthropic said. “But AI companies shouldn’t leave American ratepayers to pick up the tab.”

Anthropic’s pledge comes months after the company said it was investing $50 billion in AI infrastructure in the US, beginning with data centers in Texas and New York.

Tech giants are pouring staggering sums into AI infrastructure as they race to expand data center capacity, a buildout that has drawn scrutiny over rising electricity costs.

In November, Meta said it would invest $600 billion in the US “to support AI technology, infrastructure, and workforce expansion.” Apple said in August it would add another $100 billion to its US infrastructure spending, bringing its total investment to $600 billion.

Meanwhile, utility bills are climbing. Electric and gas utilities sought $31 billion in rate increases from state regulators last year, more than double the $15 billion requested the year before, according to a study published last month by PowerLines, a nonprofit that advocates for utility customers. Many utilities have cited power demand from data centers as the key factor for rate increases.

President Donald Trump has urged Big Tech to prevent data centers from pushing up electricity costs.

“I never want Americans to pay higher Electricity bills because of Data Centers,” Trump wrote last month in a post on Truth Social.

The “big technology companies who build them,” the president said, “must pay their own way.”

Microsoft last month introduced similar measures, saying it would pay utility rates high enough to cover the cost of its data centers’ electricity use and reduce the impact of its data center expansion on local residents.



Amazon’s $8 billion Anthropic investment balloons to $61 billion

Talk about return on investment. Amazon’s bet on Anthropic looks like an enormous win, at least on paper.

The cloud giant disclosed on Friday that it holds $45.8 billion of convertible notes and $14.8 billion of nonvoting preferred stock in the AI startup. Taken together, the figures show that Amazon’s Anthropic stake is now worth $60.6 billion.

Amazon has invested $8 billion in Anthropic since late 2023, meaning the stake’s value has grown roughly sevenfold. If borne out, that would rank among the most lucrative strategic technology investments the company has ever disclosed.
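The arithmetic behind those figures can be checked directly from the numbers in Amazon’s disclosure. This is just a back-of-the-envelope sketch using the article’s own reported values (in billions of dollars), not anything from Amazon’s filings beyond what is quoted above:

```python
# Figures reported in Amazon's disclosure, in billions of USD.
convertible_notes = 45.8   # convertible notes held by Amazon
preferred_stock = 14.8     # nonvoting preferred stock

# Total stake value: 45.8 + 14.8 = 60.6 (rounded to avoid float noise).
total_stake = round(convertible_notes + preferred_stock, 1)
print(total_stake)         # 60.6

# Amazon has put in $8 billion since late 2023.
invested = 8.0
multiple = round(total_stake / invested, 1)
print(multiple)            # 7.6 -- the roughly sevenfold increase cited above
```

The 7.6x multiple is why “seven-fold” is a fair, if slightly conservative, characterization of the paper gain.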

The two companies have a deep commercial tie-up. Anthropic has committed to buy 1 million of Amazon’s Trainium chips, binding one of the leading AI labs closely to Amazon Web Services.

Anthropic last raised $13 billion in September at a $183 billion post-money valuation, following a $3.5 billion round in March that valued the company at $61.5 billion. The AI startup is in talks for another funding round that would push its value to $350 billion.

The convertible notes held by Amazon convert to preferred stock as Anthropic raises additional capital. So every time the startup closes a round, Amazon gets valuable new stock in one of the hottest AI companies on the planet.

Some of the upside has already flowed through to Amazon’s earnings. Conversions in 2025 generated about $5.6 billion in recognized gains, and Amazon booked a further $7.2 billion upward adjustment to its “other income” in the third quarter as Anthropic’s valuation climbed.

An Amazon spokesperson told Business Insider that the value of the company’s Anthropic stake rose from $38.5 billion in the third quarter to $60.6 billion in the fourth. The company expects to book a further $15 billion gain in first-quarter “other income” as some of the notes convert to nonvoting preferred stock, the spokesperson added.

Amazon also disclosed that these valuations relied on “significant judgment.” The company classified the convertible notes as “Level 3” assets, meaning their values are based on unobservable inputs and Amazon’s own assumptions rather than market prices.

That’s common with stakes in startups, which don’t have securities that trade regularly on liquid public markets. That’s what IPOs are for — and Anthropic is reportedly eyeing a listing this year.

Have a tip? Contact this reporter via email at ekim@businessinsider.com or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.


Katherine Li, West Coast breaking news reporter at Business Insider.

Anthropic and OpenAI release dueling AI models on the same day in an escalating rivalry

The rivalry between OpenAI and Anthropic intensified this week.

The two companies released dueling new AI models on Thursday and had back-to-back podcast appearances on “TBPN.”

On Thursday, Anthropic unveiled Claude Opus 4.6, an upgraded model that the company says improves performance on office productivity and coding tasks, with an expanded “context window” that allows it to work through longer documents and more complex projects in a single session.

Meanwhile, OpenAI punched back with its own new coding-focused model called GPT-5.3-Codex, which the company says runs faster, uses fewer computing resources, and can generate and manage complex software from English instructions. The new version also comes alongside a stand-alone Codex desktop app.

Both Sam Altman, the OpenAI CEO, and Sholto Douglas, one of Anthropic’s leading researchers, appeared on the “TBPN” podcast in back-to-back chats with show hosts John Coogan and Jordi Hays.

“I think we will be heading towards a workflow where a lot of people just feel like they’re managing a team of agents,” said Altman. “And as the agents get better, they’ll keep operating at a higher and higher level of abstraction.”

Douglas, who appeared in the subsequent timeslot, told Coogan and Hays that users have been comparing previous Anthropic and OpenAI models, and they have noticed some key differences.

“The OpenAI models were a bit better at trying really, really, really hard on tough problems, but the Anthropic models were much faster and so forth,” Douglas said.

“And so they worked on speed while we worked on making the models much, much better at really, really tough problems,” Douglas added of Opus 4.6.

The latest release is part of a long-running competition between Anthropic and OpenAI, dating back to 2021, when a group of OpenAI researchers left to form Anthropic, aiming to develop safer and more controlled AI systems.

A big week for Anthropic

This week, Anthropic’s launch of industry-specific plugins triggered a stock market sell-off as Wall Street worried about AI’s impact on software.

Anthropic also took a subtle shot at OpenAI with a series of ads released this week, including one that will air during the Super Bowl.

The ads feature unnamed, humanized AIs inserting ads into the middle of their advice, alongside the promise that Anthropic’s model, Claude, will remain ad-free.

OpenAI announced in January that ads are coming to ChatGPT for users of the free version.

Altman subsequently hit back, calling Anthropic “dishonest” and defending ChatGPT as a product that brings AI “to billions of people who can’t pay for subscriptions.” He also clarified that the ads will be “clearly labeled” to differentiate them from the chatbot’s answers to queries.

“We are not stupid. We respect our users. We understand that if we did something like what those ads depict, people would rightfully stop using the product,” Altman told the “TBPN” podcast on Thursday.

“Our first principle with ads is that we’re not going to put stuff into the LLM stream,” Altman added. “That would feel crazy dystopic, like a bad sci-fi movie.”



7 of the most interesting quotes from Anthropic CEO’s sprawling 19,000-word essay about AI

Dario Amodei still has a lot to say.

On Monday, the Anthropic CEO dropped an essay of more than 19,000 words, entitled “The Adolescence of Technology,” on the future of AI, opining on everything from his fellow CEOs to feudalism, and even the Unabomber.

Best known for his warning that AI could eliminate up to 50% of entry-level white-collar jobs in the next 1 to 5 years, Amodei has tangled with Nvidia CEO Jensen Huang and the Trump White House over his views.

Here are seven of the most alarming and surprising quotes.

‘This is a serious civilizational challenge’

Amodei remains optimistic about AI overall, but his essay detailed “an intimidating gauntlet that humanity must run” to reap the benefits of AI without letting the breakthrough technology destroy the world.

“I believe if we act decisively and carefully, the risks can be overcome — I would even say our odds are good. And there’s a hugely better world on the other side of it,” he wrote. “But we need to understand that this is a serious civilizational challenge.”

AI development can’t be stopped, Amodei wrote, a conclusion even some of AI’s skeptics share. The financial and security benefits are just too massive for the private and public sectors to pass up.

It’s why winning the AI race and doing so in an ethical way is so critical, he concludes.

‘This is like selling nuclear weapons to North Korea and then bragging that the missile casings are made by Boeing’

Jensen Huang hasn’t changed Amodei’s mind on China.

“A number of complicated arguments are made to justify such sales, such as the idea that ‘spreading our tech stack around the world’ allows ‘America to win’ in some general, unspecified economic battle,” Amodei said. “In my view, this is like selling nuclear weapons to North Korea and then bragging that the missile casings are made by Boeing and so the US is ‘winning.'”

In November, Nvidia announced a partnership with Anthropic that includes an investment of up to $10 billion in the AI startup. The news sparked speculation that tensions between Amodei and Huang might be cooling.

Whatever the status of their relationship, Amodei is resolute that it is a horrendous decision to allow US companies to sell advanced chips to China.

“China is several years behind the US in their ability to produce frontier chips in quantity, and the critical period for building the country of geniuses in a data center is very likely to be within those next several years,” Amodei wrote. “There is no reason to give a giant boost to their AI industry during this critical period.”

‘Many people have told me that we should stop doing this, that it could lead to unfavorable treatment’

Amodei would like his critics to see the scoreboard.

Anthropic’s leader hasn’t tried to curry favor with the White House, nor has he vocally embraced President Donald Trump’s AI policies to the same degree as his rival CEOs. Amodei’s outspoken call for AI regulation even led David Sacks, Trump’s AI czar, to publicly rebuke him.

None of it has changed Amodei’s view that the AI industry “needs a healthier relationship with government — one based on substantive policy engagement rather than political alignment.”

“Many people have told me that we should stop doing this, that it could lead to unfavorable treatment, but in the year we’ve been doing it, Anthropic’s valuation has increased by over 6x, an almost unprecedented jump at our commercial scale,” he wrote.

Of all of his hopes, this one appears the unlikeliest. Already, AI CEOs have formed dueling super PACs ahead of the 2026 midterm elections.

‘It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless’

The tech elite made AI, and they should help society grapple with its fallout, he wrote in the essay. Amodei has long called on governments to prepare for mass job displacement. In one of the most eyebrow-raising parts of the essay, the Anthropic CEO detailed what his fellow billionaires and companies must do.

Beyond philanthropy, Amodei said companies need to be “creative” in how they stave off layoffs.

In the long term, he wrote, “It may be feasible to pay human employees even long after they are no longer providing economic value in the traditional sense.”

‘Some AI companies have shown a disturbing negligence towards the sexualization of children’

One of the biggest themes of Amodei’s essay is the risk that AI companies themselves pose. It’s a conclusion that he admits is “somewhat awkward” for him to reach. As an example, he points to the roiling topic of the sexualization of children. While he does not name xAI directly, Grok is facing investigations in multiple countries over the non-consensual sexualization of images of real people.

“Some AI companies have shown a disturbing negligence towards the sexualization of children in today’s models, which makes me doubt that they’ll show either the inclination or the ability to address autonomy risks in future models,” he wrote.

Overall, he expressed skepticism that AI companies will sacrifice profit for broader societal good. “Ordinary corporate governance,” Amodei wrote, is ill-equipped to address his worries.

Amodei said that fears that AI models may defy orders and perhaps even try to eliminate humanity are complicated by bad actors in the industry who aren’t as transparent about the risks they are seeing in their models.

“While it is incredibly valuable for individual AI companies to engage in good practices or become good at steering AI models, and to share their findings publicly, the reality is that not all AI companies do this, and the worst ones can still be a danger to everyone even if the best ones have excellent practices,” he wrote.

‘Models are likely now approaching the point where, without safeguards, they could be useful in enabling someone with a STEM degree but not specifically a biology degree to go through the whole process of producing a bioweapon’

Amodei doesn’t see the largest risks to humanity coming from AI pursuing total domination, but rather in what AI could enable humans to unleash.

Amodei described his fears that AI is lowering the barrier of entry necessary to make killer biological weapons. His greatest concern is that AI could provide the step-by-step know-how that could eventually enable even an average person to produce a bioweapon.

AI companies, Amodei said, need to ensure they create sufficient backstops to block such inquiries, including by making it difficult for hackers to jailbreak models. Adding such security is expensive, Amodei said, noting that these measures are “close to 5% of total inference costs” for some of the companies’ models.

“I am concerned that over time there may be a prisoner’s dilemma where companies can defect and lower their costs by removing classifiers,” he wrote. “This is once again a classic negative externalities problem that can’t be solved by the voluntary actions of Anthropic or any other single company alone.”

‘I would support civil liberties-focused legislation (or maybe even a constitutional amendment)’

Amodei is one of the AI industry’s most vocal proponents of AI legislation. While Meta and Microsoft supported a federal preemption of state-level AI laws, Anthropic supported AI transparency bills in California and New York that are now law.

Throughout the essay, Amodei outlined multiple areas for future legislation, including industry-wide transparency requirements like those at the state level. Even he concludes that new laws might not be enough.

“The rapid progress of AI may create situations that our existing legal frameworks are not well designed to deal with,” he wrote.

It’s why Amodei said he would go so far as to support a constitutional amendment. The US has not amended the Constitution since 1992, when the more than two-century-long effort to restrict changes to congressional pay finally won ratification from a 38th state legislature.

“I would support civil liberties-focused legislation (or maybe even a constitutional amendment) that imposes stronger guardrails against AI-powered abuses,” he wrote.



Anthropic says its buzzy new Claude Cowork tool was mostly built by AI — in less than 2 weeks

Anthropic’s new working agent was largely built by Claude itself — the latest example of AI coding tools speeding up product development.

On Monday, Anthropic announced the release of Cowork, a “more approachable” AI tool accompanying Claude Code that’s geared toward fulfilling users’ requests that are unrelated to programming. Users grant the agentic AI tool access to specific files on their computer and prompt it to complete tasks.

Boris Cherny, head of Claude Code, said that Anthropic’s AI coded “pretty much all” of Cowork.

“@claudeai wrote Cowork,” Product Manager Felix Rieseberg wrote on X. “Us humans meet in-person to discuss foundational architectural and product decisions, but all of us devs manage anywhere between 3 to 8 Claude instances implementing features, fixing bugs, or researching potential solutions.”

As a result, Rieseberg said the first edition of Cowork came together quickly.

“This is the product that my team has built here, we sprinted at this for the last week and a half,” he said during a livestream with Dan Shipper.

Over the holidays, Rieseberg said that Anthropic saw its customers using Claude for an increasing number of non-coding-related tasks.

“This sort of like the research preview, very early Alpha, a lot of rough edges, as you’ve already seen, right?” he said.

Cowork is initially available to Claude Max subscribers on the Mac app.

The launch has made a splash in the tech world, with many online users praising the product and its accessibility.

“I think that’s a really smart product,” Datasette co-creator Simon Willison wrote in a blog post about his experience. “Claude Code has an enormous amount of value that hasn’t yet been unlocked for a general audience, and this seems like a pragmatic approach.”

“This is big,” Reddit cofounder Alexis Ohanian wrote on X.

Because granting an AI agent access and the ability to take action on specific computer files comes with risk, Anthropic cautions that Cowork users should be careful.

“By default, the main thing to know is that Claude can take potentially destructive actions (such as deleting local files) if it’s instructed to,” the company said. “Since there’s always some chance that Claude might misinterpret your instructions, you should give Claude very clear guidance around things like this.”

The latest in a flurry of AI announcements

AI companies wasted no time in launching new offerings and partnerships to kick off the new year.

On Sunday, Anthropic announced Claude for Healthcare, a major addition to its healthcare and life sciences offerings. Its release came on the heels of rival OpenAI signaling its investment in the healthcare space with ChatGPT Health.

Amid AI bubble chatter and scrutiny of the increasing AI investments made by tech companies, Anthropic CEO Dario Amodei has argued that Anthropic has built a more sustainable business model that allowed it to make more educated bets on its future build-out. While he did not name OpenAI or CEO Sam Altman directly, he made some thinly veiled criticisms of his former company during a recent appearance.

“I think because we focus on enterprise, I think we have a better business model,” Amodei said at The New York Times’ Dealbook Summit. “I think we have better margins. I think we’re being responsible about it.”

Google, which some experts saw as overtaking OpenAI at the end of 2025, announced a major deal with Apple to have Gemini power Siri’s artificial intelligence capabilities.
