
Snowflake makes cuts as part of ‘targeted adjustments’ to the software company’s strategy

The $59 billion cloud data company Snowflake made “targeted” staff cuts, the company confirmed.

“These actions reflect targeted adjustments to align our teams with Snowflake’s long-term strategy,” a Snowflake spokesperson said. “Such steps are a natural part of scaling a fast-growing company, and we remain firmly committed to sustained growth. Snowflake will continue investing in our people and products to deliver exceptional value and best-in-class support for customers. We see significant opportunities ahead and are confident in our strategy and the strength of our team.”

The cuts affected its technical writing and documentation team, according to several LinkedIn posts.

This team documented Snowflake’s technology and wrote instructions so that developers and other customers could better understand how to use it.

Snowflake declined to comment on the specifics of the teams affected.

Snowflake CEO Sridhar Ramaswamy previously told Business Insider that the company is focusing on becoming more operationally efficient while building more AI products.

Recently, several other tech companies have conducted layoffs as the industry moves to focus on AI. Atlassian laid off 10% of its workforce earlier this month, and Block laid off 40% of its staff last month. Both companies attributed these cuts to AI.

Have a tip? Contact this reporter via email at rmchan@businessinsider.com or Signal at rosal.13. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.






Watch what Atlassian’s CEO said in a 4-minute video on the company’s AI-induced layoffs

  • Atlassian has cut 1,600 jobs, roughly 10% of its workforce, as it pivots to AI.
  • CEO Mike Cannon-Brookes addressed employees in a four-minute video explaining the layoffs.
  • He said he is “deeply sorry for the disruption” the layoffs create in employees’ lives.

Atlassian CEO Mike Cannon-Brookes addressed employees in a four-minute video explaining why the company is laying off about 1,600 workers — roughly 10% of its workforce — as it pivots more aggressively toward AI.

Cannon-Brookes said in a message on the company’s blog that the decision was difficult but necessary as AI reshapes how software companies operate. The shift isn’t simply about cutting costs, he said, but about changing the mix of skills the company needs as it builds products for the AI era.

About 30% of the affected roles are based in Australia. The Australian-American software firm was founded in 2002 by Cannon-Brookes and Scott Farquhar, both of whom are ranked among Australia’s 50 richest people by Forbes.

The layoffs come amid a wider shift across the tech industry as companies restructure for the AI era. Last month, Block slashed nearly half its workforce, citing productivity gains from AI.







OpenAI’s robotics head quits after company’s Pentagon deal: ‘This was about principle’

Caitlin Kalinowski, a hardware executive who joined OpenAI from Meta in 2024 to lead its robotics division, said she is resigning from the company.

In a post on X on Saturday, Kalinowski criticized OpenAI’s recent deal with the Pentagon.

“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote.

She called her resignation a matter of principle, and said she still deeply respects OpenAI CEO Sam Altman and the team and is proud of their robotics work.

A spokesperson for OpenAI confirmed Kalinowski’s resignation and defended its deal with the Defense Department.

“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the spokesperson told Business Insider. “We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society, and communities around the world.”

OpenAI struck a deal with the Pentagon last week, allowing the Defense Department to use its AI products. The agreement came after its rival Anthropic refused a similar deal over concerns that the technology would be used for mass surveillance and autonomous weapons.

Anthropic has since been effectively blacklisted in Washington. President Donald Trump described the company as “radical woke” in a Truth Social post and demanded federal agencies stop using Anthropic’s technology. Secretary of Defense Pete Hegseth then designated Anthropic a supply-chain risk to national security and said Defense Department contractors would be barred from working with the company.

OpenAI’s decision to strike a deal with the Pentagon caused an immediate backlash. Some users ditched ChatGPT in protest. Anthropic’s chatbot, Claude, is now the No. 1 free app on the Apple App Store, unseating OpenAI’s ChatGPT. Claude’s US downloads increased 240% month over month in February.

Kalinowski’s exit is a setback for OpenAI’s robotics ambitions.

Over the past year, the company has quietly built a San Francisco lab that employs about 100 data collectors, where teams are training a robotic arm to do household chores as part of a broader push to build a humanoid robot. The company told employees in December that it also plans to open a second lab in Richmond, California.

A source with knowledge of OpenAI’s plans also previously told Business Insider that the company is exploring several early-stage hardware initiatives — including robotics — but none are considered central to its core mission at this point.






Here’s what current and former OpenAI employees are saying about the company’s Pentagon deal

  • OpenAI employees are publicly discussing the company’s agreement with the Department of Defense.
  • Some have called for more clarity; others say the contract includes strong protections.
  • Sam Altman said OpenAI is working with the Pentagon to amend its contract after backlash.

OpenAI employees are airing their views about the company’s deal with the Pentagon.

In posts on X over the weekend, current and former staff weighed in on whether OpenAI compromised its safety principles in negotiations with the US Department of Defense — and how the agreement compares to rival Anthropic’s stance.

Last week, Sam Altman confirmed OpenAI’s deal to give the Department of Defense access to its AI models. The agreement came after Anthropic refused to accept government terms that could have allowed its model, Claude, to be deployed for mass domestic surveillance or autonomous lethal weapons.

OpenAI said in a blog post on Saturday that its contract with the Defense Department is “better” and includes more safety guardrails than Anthropic’s original contract.

On Monday evening, following concerns around the deal, Altman said on X that OpenAI is working with the Pentagon to “make some additions in our agreement.”

Here’s what OpenAI staff have to say:

Boaz Barak

Boaz Barak, a member of OpenAI’s technical staff who works on alignment and is also a Harvard computer science professor, pushed back against the idea that OpenAI had weakened safeguards.

In a post on X on Sunday, Barak said there is a narrative that Anthropic had a “wonderful contract” blocking the US government from using it for mass domestic surveillance or autonomous lethal weapons, and that OpenAI’s deal would now unleash those risks.

“It is wrong to present the OAI contract as if it is the same deal than Anthropic rejected, or even as if it is less protective of the red lines than the deal Anthropic already had in place before,” he wrote.

“Obviously I don’t know all details of what Anthropic had before, but based on what I know, it is quite likely that the contract OAI signed gives more guarantees of no usage of models for mass domestic surveillance or autonomous lethal weapons than Anthropic ever had,” he added.

In another X post on Monday, Barak said: “The red line of not using AI to do domestic mass surveillance is not Anthropic’s red line – it should be all of ours.”

Miles Brundage

Miles Brundage, OpenAI’s former head of policy research, said in a post on X on Saturday that “in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”

“To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics,” he added.

He later clarified on Sunday in a reply to his post that he “probably should not have said ‘caved’ in the first tweet.”

“OpenAI may very well have gotten what they wanted and, at the same time, this could have weakened Anthropic’s bargaining position since Anthropic cared about a detail OAI didn’t, and been caving from their POV,” he said.

Clive Chan

Clive Chan, a member of technical staff at OpenAI, said in a post on X on Sunday that he believes the company’s contract includes guarantees against the use of its models for mass domestic surveillance or autonomous lethal weapons. He added that he is “advocating internally to release more information” about the agreement.

“If we later learn this is not the case, then I will advocate internally to terminate the contract,” he added.

In a reply to his post, Chan acknowledged that there are likely limits on what can be publicly disclosed about defense contracts. Still, he said the company should have anticipated public concerns and prepared clearer answers in advance.

Following the publication of OpenAI’s blog post, Chan said on Sunday on X that the post “covers most” of his concerns. “Thanks to the team for being super thoughtful about the approach to this,” he added.

Mohammad Bavarian

Mohammad Bavarian, a research scientist at OpenAI, said in an X post on Monday that he doesn’t think there is an “un-crossable gap between what Anthropic wants and DoW’s demands,” adding that “with cooler heads it should be possible to cross the divide.”

The Pentagon’s designation of Anthropic as a supply chain risk is “unfair, unwise, and an extreme overreaction,” Bavarian wrote on Monday.

“Designating an organization which has contributed so much to pushing AI forward and with so much integrity does not serve the country or humanity well,” he added.

Noam Brown

Noam Brown, a researcher at OpenAI, said in an X post on Tuesday that the original language in the company’s agreement with the Department of War left “legitimate questions unanswered” — particularly around new ways AI could potentially enable lawful surveillance.

After OpenAI updated its blog post on Monday evening, Brown said “the language is now updated to address this,” but he strongly believes that “the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security.”

Brown added that deployment to the NSA and other Department of War intelligence agencies would be paused to allow time to address the potential loopholes “through the democratic process before deployment.”

“I know that legislation can sometimes be slow, but I’m afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions,” he wrote.






Kalshi’s CEO compared his company’s ‘net positive’ rivalry with Polymarket to Tom Brady and Eli Manning

Kalshi’s CEO says his company’s rivalry with Polymarket has parallels to two sets of sporting legends.

In an episode of the “20VC” podcast released on Monday, Tarek Mansour explained how prediction market rival Polymarket has encouraged his company to work harder.

“What I’m learning over time is that an industry truly becomes an industry when there’s a rivalry, because that rivalry will push you beyond the limits of what you thought you could get to,” Mansour said.

He compared the companies to National Football League quarterbacks Tom Brady and Eli Manning.

“When Tom Brady kind of reflected on that back in the day, he’s like, ‘You know, we were like the most ferocious on the field, and we fought each other,'” Mansour said. “But then over time, he became grateful for that because he realized that without Manning being in there and vice versa, he would have never achieved what he achieved.”

“I think that’s happening in prediction markets,” he added.

Kalshi, founded in 2018, lets users bet on the outcome of events such as elections, sports matches, and economic indicators. Last week, it announced partnerships with media giants CNN and CNBC, and said that it raised $1 billion at a valuation of $11 billion.

Polymarket, its blockchain-enabled competitor, was founded in 2020 and offers similar services. It was last valued at $13.5 billion in November, per PitchBook.

The popularity of prediction platforms has exploded since a legal victory for Kalshi in the US last fall. Now, users can bet on questions ranging from the popularity of Labubu dolls to Elon Musk’s net worth.

Last year, Mansour said in an interview that his employees asked social media influencers to promote memes about an FBI raid on the home of Polymarket CEO Shayne Coplan. On Monday’s podcast, Mansour called the move a “mistake” and said he “made clear to the team: ‘Don’t ever do this again.'”

Mansour also compared the two companies to soccer stars Lionel Messi and Cristiano Ronaldo, and said that it was not a coincidence that the two “greatest” players exist in the same era.

“Without Polymarket, we wouldn’t have pushed our marketing and pushed our product as hard,” he said. “That sort of infighting is going to push both of us to scale this industry and reach heights that we honestly wouldn’t have been able to otherwise, which long-term is actually net positive for the customer.”

Polymarket did not immediately respond to Business Insider’s request for comment about Mansour’s comparisons.



