
Meta is running intensive AI training weeks to get employees testing agents and coding with Claude

At Meta, there’s no escaping AI.

The company has begun running intensive AI training weeks to encourage staff to experiment more with AI tools, according to Meta employees who spoke with Business Insider and public LinkedIn posts.

The weeks have involved a series of hackathons, demos, and other projects where Meta staff show off what they can build with AI, regardless of their job title or seniority. Some of the projects are built with Anthropic’s Claude Code, which the company has adopted widely internally.

This is part of Meta’s latest initiative to embrace AI across its workforce, which has included setting org targets for AI adoption and reorganizing some teams around AI-native “pods.” Similar pushes are taking place across corporate America as companies aim to become more efficient with AI. Google has told some employees their AI use will be considered in performance reviews, and JPMorgan has told software engineers it expects them to be harnessing AI to save time.

“It’s well-known that this is a priority and we’re focused on using AI to help employees with their day-to-day work,” a Meta spokesperson told Business Insider.

Internally, these sessions have been given names such as “AI Transformation Week.” During the sessions, some employees were given demos on how agents and other tools could work across their laptops and phones, an employee who attended some sessions told Business Insider.

Some of these AI weeks took place in March. One Meta employee told Business Insider that some teams held their own AI weeks at the end of last year, during which staff used vibe coding to create something valuable with no strict output requirements.

At one hackathon during Meta’s AI Transformation Week, attendees sat through demos of Meta’s own internal AI tools, Claude Code, and other products, according to a LinkedIn post from an employee. AI agents are a big focus, with the aim of having employees guide autonomous systems that can handle everything from coding to compiling reports.

Design is also part of the effort. One Meta product manager touted building an interactive vibe-coding guide for designing products at Meta using Claude Code, according to her website.

Pods and goals

While some employees were brushing up on AI this week, Meta laid off several hundred employees across Reality Labs, the division overseeing its virtual reality projects, and other orgs. The company has spent billions on hiring top AI talent and building out infrastructure. However, it has yet to launch its long-awaited frontier model, internally codenamed Avocado.

While that has given Meta the perception of being behind in the AI race, a top Wall Street analyst said earlier this month that the company’s aggressive internal AI transformation could, in fact, give it “insurmountable” cost and performance advantages.

Meta has been making other changes in an effort to be what CEO Mark Zuckerberg has described as “AI native.” In one division of Reality Labs, employees were rebranded with titles like “AI builder” and were organized into AI-native “pods,” Business Insider previously reported.

The company has also set specific goals for adopting AI tools that vary across teams, according to an internal document reviewed by Business Insider.

On Tuesday, Meta’s CTO Andrew Bosworth said he would take over leadership of the company’s efforts to adopt AI for internal use, an initiative known internally as “AI for Work,” according to a copy of the post seen by Business Insider and first reported by The Wall Street Journal.

“These tools hold the promise of giving each employee so much more power to accomplish their work,” Bosworth said in a post on X.






Claude hits No. 1 on App Store as ChatGPT users defect in show of support for Anthropic’s Pentagon stance

While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of the App Store.

Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed. President Donald Trump ordered federal agencies to phase out its technology.

Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a Friday night post on X that it had reached an agreement with the Department of War to deploy AI models in its classified network.

The agreement has left some loyal ChatGPT users uneasy about OpenAI’s ambitions, prompting online debates about the ethical implications, with some saying they were defecting to rival Anthropic’s Claude.

As of 6:38 p.m. ET on Saturday, Claude ranked number one among the most downloaded productivity apps on Apple’s App Store, ahead of ChatGPT.





Converts have taken to social media to share screenshots documenting their switch.

Pop musician Katy Perry wrote that she was “done” on X, alongside a screenshot of Claude’s pricing page, with a red heart around the $20-per-month “Pro” plan.

Another X user, Adam Lyttle, wrote “Made the switch,” alongside a screenshot of his email inbox with a receipt from Anthropic and cancellation confirmation from OpenAI.

On Reddit’s ChatGPT subreddit, dozens of users say they’ve deleted their accounts and are urging others to do the same.

“Cancel ChatGPT” has become a common refrain online, while some users have taken a more personal tone, saying Altman’s move “crossed the line.”

The agreement hasn’t alienated all AI users, however.

In one Reddit thread, several commenters said the news does not affect their choice of AI model, arguing that Anthropic’s work with Palantir raises similar concerns. In November 2024, Anthropic, Palantir, and Amazon Web Services struck an agreement to provide US intelligence and defense agencies access to Claude models.

After Secretary of War Pete Hegseth said he would designate Anthropic as a “supply chain risk to national security,” Anthropic said it would “challenge any supply chain risk designation in court.”

In his Friday post, Altman said the Department of War had agreed with two of OpenAI’s safety principles.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

By Saturday afternoon, OpenAI published a more detailed description of its contract with the DoW, including the specific language it used surrounding the use of its models for surveillance and autonomous weapons.

On the topic of autonomous weapons, OpenAI said:

The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

On the topic of mass surveillance, OpenAI said:

The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.

While some chatbot users suggested it’s all fair in business, war, and federal procurement, others suggested the Pentagon’s stance may have handed Anthropic a public relations win.

X user Tae Kim joked that Hegseth might need a new title: “Secretary Hegseth Chief of Claude Marketing.”






Anthropic says its buzzy new Claude Cowork tool was mostly built by AI — in less than 2 weeks

Anthropic’s new Cowork agent was largely built by Claude itself, the latest example of AI coding tools speeding up product development.

On Monday, Anthropic announced the release of Cowork, a “more approachable” AI tool accompanying Claude Code that’s geared toward fulfilling users’ requests that are unrelated to programming. Users grant the agentic AI tool access to specific files on their computer and prompt it to complete tasks.

Boris Cherny, head of Claude Code, said that Anthropic’s AI coded “pretty much all” of Cowork.

“@claudeai wrote Cowork,” Product Manager Felix Rieseberg wrote on X. “Us humans meet in-person to discuss foundational architectural and product decisions, but all of us devs manage anywhere between 3 to 8 Claude instances implementing features, fixing bugs, or researching potential solutions.”

As a result, Rieseberg said the first edition of Cowork came together quickly.

“This is the product that my team has built here, we sprinted at this for the last week and a half,” he said during a livestream with Dan Shipper.

Rieseberg said that over the holidays, Anthropic saw its customers using Claude for an increasing number of non-coding tasks.

“This [is] sort of like the research preview, very early Alpha, a lot of rough edges, as you’ve already seen, right?” he said.

Cowork is initially available to Claude Max subscribers on the Mac app.

The launch has made a splash in the tech world, with many online users praising the product and its accessibility.

“I think that’s a really smart product,” Datasette co-creator Simon Willison wrote in a blog about his experience. “Claude Code has an enormous amount of value that hasn’t yet been unlocked for a general audience, and this seems like a pragmatic approach.”

“This is big,” Reddit cofounder Alexis Ohanian wrote on X.

Because granting an AI agent access and the ability to take action on specific computer files comes with risk, Anthropic cautions that Cowork users should be careful.

“By default, the main thing to know is that Claude can take potentially destructive actions (such as deleting local files) if it’s instructed to,” the company said. “Since there’s always some chance that Claude might misinterpret your instructions, you should give Claude very clear guidance around things like this.”

The latest in a flurry of AI announcements

AI companies wasted no time in launching new offerings and partnerships to kick off the new year.

On Sunday, Anthropic announced Claude for Healthcare, a major addition to its healthcare and life sciences offerings. Its release came on the heels of rival OpenAI signaling its investment in the healthcare space with ChatGPT Health.

Amid AI bubble chatter and scrutiny of the growing AI investments made by tech companies, Anthropic CEO Dario Amodei has argued that Anthropic has built a more sustainable business model that allowed it to make more educated bets on its future build-out. While he did not name OpenAI or CEO Sam Altman directly, he made some thinly veiled criticisms of his former company throughout the event.

“I think because we focus on enterprise, I think we have a better business model,” Amodei said at The New York Times’ Dealbook Summit. “I think we have better margins. I think we’re being responsible about it.”

Google, which some experts saw as overtaking OpenAI at the end of 2025, announced a major deal with Apple to have Gemini power Siri’s artificial intelligence capabilities.






The creator of Anthropic’s Claude Code likes to hire engineers who do ‘side quests’ like making kombucha

Want a job at Anthropic? It might help to get a hobby.

The AI boom is changing the job requirements for an engineer. Not only do they need coding skills, but they must also know how to operate vibe coding tools and stay up to date with new AI models.

Anthropic leader Boris Cherny looks for something else: “Side quests.”

“When I hire engineers, this is definitely something I look for,” he said on “The Peterman Pod.”

Cherny’s definition of side quests includes “cool weekend projects,” like someone who’s “really into making kombucha.” It’s a sign that the engineer is curious and interested in other things, he said.

Much of Cherny’s own growth came from his side projects. Now a key figure at Anthropic, he created Claude Code, a tool that is popular with engineers across the country.

“These are well-rounded people,” he said. “These are the kind of people I enjoy working with.”

Cherny also said he prefers that his new hires be “generalists.”

He gave the example of an engineer who can code, but is also able to work on product and design. That all-star engineer also seeks out user feedback.

“This is how we recruit for all functions, now,” he said. “Our project managers code, our data scientists code, our user researcher codes a little bit.”

Cherny isn’t alone in pushing for jobs to become more generalist. Figma CEO Dylan Field said in October that AI was causing job titles to merge, resulting in everyone being a “product builder.”

What else is Anthropic looking for? For a time, it restricted candidates’ use of AI in their applications.

In May, Business Insider reported that Anthropic asked candidates for certain jobs not to use AI in their written responses so the company could test their “non-AI-assisted communication skills.”

Anthropic changed its policy in July, allowing candidates to seek out assistance from Claude.

For younger engineers, a job at Anthropic may be hard to come by. In May, CPO Mike Krieger said on “Hard Fork” that he was focused on hiring experienced engineers — and had “some hesitancy” about entry-level workers.

On the podcast, Cherny said that his love of generalists came from his own career trajectory. Having worked at startups since he was 18, Cherny had to do everything, he said.

“At big companies, you get forced into this particular swim lane,” he said. “It’s just so artificial.”



