
Lovable exec says ‘big boys and girls’ like OpenAI and Anthropic worry her more than other vibe coding startups

Other vibe coding players are not the biggest competition, says one Lovable exec.

“I always worry about the big boys and girls in the world,” Lovable’s head of growth Elena Verna said on a Sunday episode of the “20VC” podcast. “So, OpenAIs, Anthropics, Googles, Apples, more so than our competitors that spring up from the bottom or from sideways.”

That’s because the distribution power of these tech giants and frontier AI labs is unparalleled, she said.

Stockholm-based Lovable was valued at $6.6 billion in a December funding round led by CapitalG and Menlo Ventures. It competes with other vibe coding startups like Cursor, Replit, and Emergent, as well as far bigger and better-funded players, including OpenAI, Anthropic, and Microsoft, that make their own AI coding tools.

Verna, who joined the startup last May after a series of advisory and head-of-growth stints at various startups, said that in a world where products are becoming increasingly similar, distribution and growth are winning strategies.

“Whoever has the best distribution that is earned, that is competitively defensible, that is sustainable, that is predictable, is going to be the winner in the market,” she said. “I worry about the companies that have that figured out.”

Verna’s comments about competition follow a period of brutal comparisons between products made by vibe coding startups and Anthropic’s Claude Code.

After Anthropic released its latest model, Opus 4.6, founders and developers said on X that they were ditching their expensive Cursor and Lovable subscriptions for Claude Code.

Still, Lovable is going strong.

The Swedish startup’s annual recurring revenue surged by more than 30% in a single month, from $300 million to $400 million, Business Insider reported. ARR, a key metric for gauging startup performance, refers to the predictable revenue a company expects to generate over a year.

Lovable’s chief revenue officer, Ryan Meadows, told Business Insider that the company plans to more than double its head count by the end of 2026, from 146 to 350 employees.

He added that Lovable, which specializes in making coding user-friendly, sees at least 200,000 new vibe coding projects created each day.





I work at Meta’s Superintelligence Labs and used to be at OpenAI. Here’s what the job is like — and what I’ve learned.

This as-told-to essay is based on a conversation with Prakhar Agarwal, an applied researcher at Meta Superintelligence Labs who previously worked at OpenAI. The following has been edited for length and clarity. Business Insider has verified his employment and academic history.

My day-to-day varies a lot depending on what stage of the project we are in versus what the immediate deliverables are.

At OpenAI and Meta, you have these milestones — say, a big training or reinforcement-learning run — in 10 months. It gets intense when we’re approaching the deadline.

Whatever work I identify is always based on the current iteration of the model. If I say the model isn’t good at X and my solution helps fix X, it is based on that version of the model. If I miss the deadline, I don’t know whether the next version will have the same issues or not.

If we are further away from that deadline, then we’re mostly working on evaluations and trying to find failure cases and issues with the existing model.

The work is super dynamic. Sometimes you think something is super easy and you’ll get it done in a day. Other times, it’s the opposite — because there are so many unknowns, it might take a week.

Working at frontier labs feels very different from Big Tech

What we’re limited by in these foundational labs is compute. It’s not like Big Tech or other places where you can keep hiring a bunch of people and give them small pieces of a task to do.

Everyone needs compute to actually do something, and as soon as you have a lot of people, the compute gets divided, so no one will be able to do anything.

You also want high-bandwidth communication between stakeholders — you don’t want 10 different layers of communication. The speed of iteration is much faster. These core groups tend to be much smaller and tighter.

The idea of a “team” is also very fluid. Each person has their own projects, but they collaborate with others to work on joint projects. At Meta and OpenAI, there are a lot of senior people and not a lot of junior people, so everyone has a decent scope of projects.

Sometimes I collaborate more with people outside my immediate team than within it. Your scope isn’t restricted to four or five people. Your scope is the problem you’re trying to solve.

Communication and going deep with coding are key

Communication is the most important aspect in these labs. Because a lot of things aren’t documented, you need to be able to articulate what you’re doing, why you’re doing it, what the next steps are, convey your results, and get feedback on your work.

Becoming comfortable going through the code and identifying the specifics is one of the most important skills I’ve seen. The speed at which the code evolves is much faster than the documentation. If you’re stuck on something, read the code and try to understand it yourself.

Having some understanding of what’s happening across different verticals also gives you a good overview of the ideas and approaches people are trying. Because everything is super related, you might learn something from there or find ways to contribute.

The biggest advantage these labs have is knowing what doesn’t work

A research paper tells you, “I did X, Y, and Z in this specific order, and it works.” But what you don’t see is that before doing X, Y, and Z, I tried 50 different things that didn’t work — and people don’t talk about that.

That, to me, is the real strength of these foundation labs. Because of all the experimentation and all the work that has already been done, the teams have built really strong intuitions. They know which things won’t work or won’t scale, and which are going to work well.

People outside often look for the gains, but they miss the point that even the misses are very valuable.

Advice for those who want to work in top labs

I don’t have a good answer for managing burnout. You’re pretty much just going with the flow. You’re working at the cutting edge, and to put it simply, if you want to be here, you can’t think about it on a strict day-to-day basis.

What I would tell my younger self is to be comfortable exploring new avenues and new ideas. What I’ve seen is that we try to play to our strengths or stay in a deterministic setting where we know we’ll do fine. But in these domains, the speed at which things are moving is so fast that you need to be able to switch to a new topic.

Build the muscle to handle being thrown into something completely new. Sometimes, it’s more a psychological issue than a skill issue.

Do you have a story to share about working at a top AI lab? Contact this reporter at cmlee@businessinsider.com.






Here’s what current and former OpenAI employees are saying about the company’s Pentagon deal

  • OpenAI employees are publicly discussing the company’s agreement with the Department of Defense.
  • Some have called for more clarity; others say the contract includes strong protections.
  • Sam Altman said OpenAI is working with the Pentagon to amend its contract after backlash.

OpenAI employees are airing their views about the company’s deal with the Pentagon.

In posts on X over the weekend, current and former staff weighed in on whether OpenAI compromised its safety principles in negotiations with the US Department of Defense — and how the agreement compares to rival Anthropic’s stance.

Last week, Sam Altman confirmed OpenAI’s deal to give the Department of Defense access to its AI models. The agreement came after Anthropic refused to accept government terms that could have allowed its model, Claude, to be deployed for mass domestic surveillance or autonomous lethal weapons.

OpenAI said in a blog post on Saturday that its contract with the Defense Department is “better” and includes more safety guardrails than Anthropic’s original contract.

On Monday evening, following concerns around the deal, Altman said on X that OpenAI is working with the Pentagon to “make some additions in our agreement.”

Here’s what OpenAI staff have to say:

Boaz Barak

Boaz Barak, a member of OpenAI’s technical staff who works on alignment and is also a Harvard computer science professor, pushed back against the idea that OpenAI had weakened safeguards.

In a post on X on Sunday, Barak said there is a narrative that Anthropic had a “wonderful contract” blocking the US government from using it for mass domestic surveillance or autonomous lethal weapons, and that OpenAI’s deal would now unleash those risks.

“It is wrong to present the OAI contract as if it is the same deal than Anthropic rejected, or even as if it is less protective of the red lines than the deal Anthropic already had in place before,” he wrote.

“Obviously I don’t know all details of what Anthropic had before, but based on what I know, it is quite likely that the contract OAI signed gives more guarantees of no usage of models for mass domestic surveillance or autonomous lethal weapons than Anthropic ever had,” he added.

In another X post on Monday, Barak said: “The red line of not using AI to do domestic mass surveillance is not Anthropic’s red line – it should be all of ours.”

Miles Brundage

Miles Brundage, OpenAI’s former head of policy research, said in a post on X on Saturday that “in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”

“To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics,” he added.

He later clarified on Sunday in a reply to his post that he “probably should not have said ‘caved’ in the first tweet.”

“OpenAI may very well have gotten what they wanted and, at the same time, this could have weakened Anthropic’s bargaining position since Anthropic cared about a detail OAI didn’t, and been caving from their POV,” he said.

Clive Chan

Clive Chan, a member of technical staff at OpenAI, said in a post on X on Sunday that he believes the company’s contract includes guarantees against the use of its models for mass domestic surveillance or autonomous lethal weapons. He added that he is “advocating internally to release more information” about the agreement.

“If we later learn this is not the case, then I will advocate internally to terminate the contract,” he added.

In a reply to his post, Chan acknowledged that there are likely limits on what can be publicly disclosed about defense contracts. Still, he said the company should have anticipated public concerns and prepared clearer answers in advance.

Following the publication of OpenAI’s blog post, Chan said on Sunday on X that the post “covers most” of his concerns. “Thanks to the team for being super thoughtful about the approach to this,” he added.

Mohammad Bavarian

Mohammad Bavarian, a research scientist at OpenAI, said in an X post on Monday that he doesn’t think there is an “un-crossable gap between what Anthropic wants and DoW’s demands,” adding that “with cooler heads it should be possible to cross the divide.”

The Pentagon’s designation of Anthropic as a supply chain risk is “unfair, unwise, and an extreme overreaction,” Bavarian wrote on Monday.

“Designating an organization which has contributed so much to pushing AI forward and with so much integrity does not serve the country or humanity well,” he added.

Noam Brown

Noam Brown, a researcher at OpenAI, said in an X post on Tuesday that the original language in the company’s agreement with the Department of War left “legitimate questions unanswered” — particularly around new ways AI could potentially enable lawful surveillance.

After OpenAI updated its blog post on Monday evening, Brown said “the language is now updated to address this,” but he strongly believes that “the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security.”

Brown added that deployment to the NSA and other Department of War intelligence agencies would be paused to allow time to address the potential loopholes “through the democratic process before deployment.”

“I know that legislation can sometimes be slow, but I’m afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions,” he wrote.






Sam Altman says OpenAI will tweak its Pentagon deal after surveillance backlash

OpenAI said it is amending its contract with the Pentagon.

After public concerns that OpenAI’s new deal with the Pentagon would allow the government to use its AI for mass surveillance, CEO Sam Altman posted an internal memo to X on Monday evening, saying that the company is working with the Pentagon to “make some additions in our agreement.”

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of US persons and nationals,” Altman wrote on X.

“The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract,” Altman added.

Altman’s memo came after OpenAI struck a deal with the Pentagon on Friday to deploy its AI models on classified military networks. The contract landed OpenAI in the middle of a standoff between the Pentagon and Anthropic and came just a day before the US struck Iran.

In his note, Altman said that he got things “wrong,” saying the company should not have “rushed” to seal the deal.

“The issues are super complex, and demand clear communication,” he said. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Hours before the OpenAI deal was announced, President Donald Trump ordered federal agencies to halt use of Anthropic’s Claude system, following a breakdown in talks over the military use of AI. Anthropic had specific red lines: explicit contractual bans on mass domestic surveillance and fully autonomous weapons, which are systems capable of killing without human oversight.

As of Friday, nearly 500 OpenAI and Google employees signed on to an open letter in support of Anthropic’s decision.

The OpenAI deal soon triggered backlash and concerns that OpenAI’s tools would be used for domestic surveillance or lethal autonomous weapons, claims that Altman immediately disputed. Protests took place in front of OpenAI’s offices in San Francisco and London, and QuitGPT, an advocacy group opposed to OpenAI, has launched a boycott and organized a protest scheduled for Tuesday.

Anthropic did not immediately respond to a request for comment.






OpenAI shares its contract language and ‘red lines’ in agreement with the Department of War

OpenAI says its agreement with the Department of War is “better” and has more safety guardrails than the one Anthropic was blacklisted for refusing to comply with.

In a blog post published Saturday, OpenAI shared some contract language from its agreement with the Department of War, including clauses that indicate its tech cannot be used for mass domestic surveillance or to power autonomous weapons or high-stakes decision systems like “social credit” scores.

“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” OpenAI’s post read. “In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.”

OpenAI CEO Sam Altman took to social media shortly after the company’s blog post was published, answering questions from users concerned about the nature of OpenAI’s agreement with the government.

In Ask-Me-Anything-style responses, he doubled down on OpenAI’s agreement being better than Anthropic’s, not just for the broader AI landscape but also for the American people.

“Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” Altman wrote in response to a question about why OpenAI agreed to partner with the government when its rival would not. “I think Anthropic may have wanted more operational control than we did.”

OpenAI’s agreement with the federal government comes on the heels of Anthropic being blacklisted and declared a supply chain risk after refusing to comply with the military’s terms of use for the company’s frontier model, Claude.

Anthropic, in a Friday statement, said that “no amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons” and vowed to “challenge any supply chain risk designation in court.”

OpenAI, in its Saturday post, argued that Anthropic should not be designated as a supply chain risk and said it had made its position “clear to the government.” Its agreement with the Department of War stemmed, in part, from a desire to “de-escalate things between DoW and the US AI labs.”

“A good future is going to require real and deep collaboration between the government and the AI labs,” OpenAI’s post reads. “As part of our deal here, we asked that the same terms be made available to all AI labs, and specifically that the government would try to resolve things with Anthropic; the current state is a very bad way to kick off this next phase of collaboration between the government and AI labs.”

Representatives for OpenAI and Anthropic did not immediately respond to requests for comment from Business Insider. It was not immediately clear whether Anthropic, or any other leading AI company, had been offered similar contractual terms to those that OpenAI said it had agreed to.

OpenAI said that, as part of its deal with the Department of War, it will maintain “full control” over the safety stack it deploys, and robust “safety guardrails” to prevent misuse. Should the government violate the terms of the agreement, OpenAI said it “could” terminate the contract.

“We don’t expect that to happen,” OpenAI said in its post.

Altman, in his Ask Me Anything posts, wrote that OpenAI would not agree to allow the government to use its technology for mass domestic surveillance “because it violates the constitution.”

He added that he is prepared for a potential dispute over the legality of specific government requests in the future, and said that if the Constitution were amended to make such surveillance legal, “Maybe I would quit my job.”

“I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the constitution,” Altman wrote. “I am terrified of a world where AI companies act like they have more power than the government. I would also be terrified of a world where our government decided mass domestic surveillance was ok. I don’t know how I’d come to work every day if that were the state of the country/Constitution.”

The dispute between the government and the AI giants has sparked widespread criticism, with many concerned about the ethical implications of the Department of War’s use of AI and of OpenAI’s agreement to give the government access to its technology.

OpenAI on Saturday said it believes AI will “introduce new risks in the world” and that allowing the government to use its models will give the people defending national security “the best tools” to do so.

Business Insider previously reported that Anthropic’s model, Claude, shot to the top of the app store on Saturday, and many people on social media, including celebrities like Katy Perry, have publicly posted about canceling their ChatGPT subscriptions in the wake of OpenAI’s agreement with the government.






OpenAI just hired another employee from Mira Murati’s Thinking Machines Lab

Another employee at Thinking Machines Lab is leaving to rejoin OpenAI.

It’s the latest in a string of departures from the $12 billion AI startup, which is led by former OpenAI CTO Mira Murati and lately has been the subject of high-profile poaching campaigns from bigger tech companies.

The latest employee to go back to OpenAI is Jolene Parish, who joined Thinking Machines Lab in April last year, according to her LinkedIn profile. She had worked at OpenAI for three years prior. Before that, she worked for 10 years on security at Apple, her profile says.

Other employees rejoined OpenAI last month. Two cofounders, former CTO Barret Zoph and Luke Metz, left, along with researcher Sam Schoenholz.

Lia Guy, another researcher, also rejoined OpenAI, The Information reported. Another cofounder, Andrew Tulloch, left for Meta late last year, The Wall Street Journal reported.

OpenAI and Thinking Machines Lab declined to comment.

Thinking Machines Lab raised a monster $2 billion funding round last year, valuing the company at $12 billion, spokespeople said at the time. The startup launched its first product, Tinker, last October.

The San Francisco-based company has become known for attracting star-studded talent. It quietly hired Neal Wu, a legendary coder who won three gold medals in an Olympiad for programming, and Soumith Chintala, the creator of the open-source AI project PyTorch at Meta, who is now Thinking Machines Lab’s CTO, Business Insider previously reported.

Have a tip? Contact this reporter via email at crollet@businessinsider.com or on Signal and WhatsApp at 628-282-2811. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.






OpenAI, Meta, and Apple’s latest battle: Breaking your phone addiction

The average American picks up their phone more than 200 times a day. Teens are pinged with some 250 notifications a day — during school, after school, and overnight. The apps meant to prevent you from checking apps have done little to stop the problem. Now, some of the tech companies that helped create our screen dependence are trying to disrupt it.

Later this year, OpenAI plans to debut a small, screenless device that Sam Altman describes as more “peaceful” than a smartphone. Apple, the Oz of screentime, is developing smart glasses, a pin, and AirPods with more AI built in, according to a Tuesday report from Bloomberg, with the rumored pendants featuring microphones and cameras meant to serve as the “eyes and ears” of the iPhone. Meta has teased its fully augmented reality Orion glasses since 2024. While that device doesn’t have a release date, the company last year sold some 7 million pairs of its smart glasses, which Mark Zuckerberg has cast as the start of the post-smartphone future he predicts. Eventual smart specs could be more screen all-the-time than screenless, but they also rely on AI to make the experience far more hands-free than swiping and scrolling on a phone.

Could AI be what finally breaks our phone addiction?

Since 2007, no device out of Silicon Valley has captured the universal imagination the way the iPhone did when Steve Jobs put your iPod, your phone, and the internet together on a 3.5-inch screen. Competitors have tried for a decade-plus to shift people from the iPhone to smart glasses, and have largely failed. The awe around smartphones has turned to derision, as excessive screen time is linked to disrupted sleep, anxiety, and fractured attention. Now, developers are hoping the AI boom can give us the next big thing.

Beating the smartphone would mean replacing a device that 91% of American adults now carry — a device for which millions of apps have been developed and on which people now depend in lieu of wallets, cameras, and health monitors. New AI devices can’t just copy what smartphones do, says Ramon Llamas, a research director at the technology intelligence firm IDC: They have to show they have a solution to an everyday problem. If they don’t, Llamas says, “these things are just gonna really end up as solutions looking for a problem to solve.”


Critiques of screen time can be as blunt and smoothbrained as what the critics say excessive screen time makes you. A seven-hour daily log may seem like a staggering amount of dependence, but what did the person spend those seven hours doing? Doomscrolling late into the night, or FaceTiming with a far-away friend? With AI wearables, there’s the risk of becoming dependent on the device for different reasons.

“The screen may not be there, but what’s getting filled in the back is already this problem of AI companionship,” says Olivia Gambelin, an AI ethicist and author of the book “Responsible AI.” An AI device designed to do something very specific — like listen to a meeting and then send follow-up emails or messages related to action points discussed — could save people time and keep them from writing tedious emails and Slack messages from their desk. But that same device listening in on personal conversations with family and friends could compromise a relationship and erode the positive effects that texting a friend to check in can have on both people (already, my friends are tiring of the iPhone’s AI summaries of our group texts, which become an intermediary in our threads of gossip and jokes in the name of efficiency). Wearing microphones and cameras into social interactions and businesses is likely to really weird out some of the people around you. More people are entering into romantic, dependent relationships with AI companions, and a swell of loud dissenters is criticizing the technology for taking jobs and attempting to replicate human relationships.

But OpenAI is betting that it can package its technology in a device in a way that calms the user. “When I use current devices or most applications, I feel like I am walking through Times Square in New York and constantly just dealing with all the little indignities along the way,” Altman said in November. OpenAI’s device, he said, would be less Times Square, more “sitting in the most beautiful cabin by a lake and in the mountains and sort of just enjoying the peace and calm.” That’s because the AI device would learn “contextual awareness of your whole life,” and when best to send you alerts.

The screen itself may not be the problem; it’s what’s summoning us to the screen.

Other AI wearables have failed by falling short of that goal. Humane sold a wearable AI pin, priced at $700 plus a monthly fee to connect it, but pulled it from the market a year ago. It failed perhaps because it tried too hard to replace our phones — it didn’t interact with them, but provided a shoddy replacement. Novelty couldn’t outshine poor usability. The AI Friend pendant, which can’t search the internet or help with tasks beyond sending reminders and instead acts as an eavesdropping sycophant around its user’s neck, was mocked relentlessly and sold just a few thousand devices after it hit the market last year.

Companies trying to make AI hardware should focus on “transformative features,” Jason Low, research director at Omdia, tells me in an email. AI wearables must be more than “marginally more convenient,” should integrate with our existing products, and should have a clear, stated value. For example, glasses that provide real-time language translation, or devices for fitness and health tracking, offer features our smartphones can’t match. The Oura ring, which started out as a niche tech-bro buy, continues to grow in popularity, particularly among women, for the novel insights it can offer; the company announced last fall that it had sold 5.5 million rings since 2015, with more than 2.5 million sold between June 2024 and September 2025. “These devices often deliver a more polished user experience compared to general-purpose, do-it-all AI devices,” Low says.

Llamas tells me that the AI functions of a wearable have to be “contextual, personalized, and actionable,” like reminding the wearer to send birthday flowers or accurately directing the user to the nearest Starbucks when asked. A first-attempt device shouldn’t try to replace the smartphone but should integrate with the Apple or Google ecosystems, he says. Apple and OpenAI did not respond to requests for comment about their rumored products for this story.

If anything has hyped Silicon Valley like the iPhone, it’s been AI. But three years after the mainstream adoption of ChatGPT, the value of generative AI in the white-collar workforce has yet to be fully realized. That could make a consumer product a hard sell, too. “Some of the overwhelm that’s coming with AI that I see in general users is you can use it for everything, or it’s promoted that way, which is actually quite stifling,” Gambelin says.

In our quest to find a peaceful equilibrium with tech, the screen itself may not be the problem; it’s what’s summoning us to the screen. Its bright colors, games, and infinite scroll give quick dopamine hits that entice us to stay glued to it. But much of what pings my phone throughout the day are useless notifications trying to get me to reopen one of dozens of apps — a markdown moment on a clothing thrifting app, a like from my best friend on the Instagram story I posted of my dog, and, ironically, a report of how much time I’ve already logged. There’s a relentless business model at play to keep us on these apps. No screens would mean no infinite scroll through TikTok, no Candy Crush — but app developers and companies may need to find new ways to reach people if wearables catch on, and an always-there AI device and companion might not be as peaceful as Altman describes. Our collective screen time is a problem, but the AI wearable will have to surprise us all with something novel to be useful.


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.






OpenClaw creator says Europe’s stifling regulations are why he’s moving to the US to join OpenAI

In Europe, there’s been a lot of handwringing over why there are very few large, successful tech companies in the region. Peter Steinberger, the creator of the agentic AI hit OpenClaw, has an answer.

Steinberger was recently hired by OpenAI and is moving from Europe to the US. An Austrian by birth, he previously split his time between London and Vienna.

On X, a professor from a European university asked why Europe couldn’t retain this tech talent.

Steinberger replied that most people in the US are enthusiastic, while in Europe, he’s scolded about responsibility and regulations.

If he built a company in Europe, he would struggle with strict labor regulations and similar rules, he added.

At OpenAI, he said, most employees work six to seven days a week and are paid accordingly. In Europe, that would be illegal, he added.

The most valuable company in Europe is Dutch chip-equipment maker ASML, valued at about $550 billion. In contrast, there are 10 US companies worth more than $1 trillion. Most of these are tech companies.

In 2024, a landmark EU report found that the region had fallen behind the US, particularly in innovation. It proposed a series of changes to tackle the problem, but by the end of 2025, few of the recommendations had been implemented.

Steinberger said he was hopeful about EU INC, an effort to create a single corporate legal framework to make it simpler to run a business across the region.

But this seems to be “fizzling out,” he wrote on X. “Watered down, too much egoistic national interest that ultimately hurts everyone.”

Sign up for BI’s Tech Memo newsletter here. Reach out to me via email at abarr@businessinsider.com.






Elon Musk and OpenAI posture over pizza as the AI talent war heats up

The rivalry between xAI and OpenAI is heating up again — this time, over wood-fired pizza.

Over the weekend, Elon Musk and an OpenAI engineer jockeyed on X about wood-fired crusts, dough fermentation, and campus chefs.

On its face, it was a lighthearted back-and-forth about free pizza for lunch. Underneath, it encapsulates a trend playing out in Silicon Valley: rival AI companies are publicly pitching culture — and perks like free lunch — in the talent war for top engineers.

The exchange began when Musk reposted a video of an xAI engineer calling his job the “opportunity of a lifetime.”

“Join @xAI,” Musk wrote.

The post quickly drew a response from xAI’s competitor, OpenAI.

“Or join Codex,” said Thibault Sottiaux, an engineering lead working on OpenAI’s Codex software agent, who is also hiring. OpenAI operates “with much of the same principles,” he wrote — before adding an increasingly common recruitment pitch.

“Join the bright side, we have pizza,” Sottiaux wrote.

Musk fired back: “But how good is your wood oven pizza?”

The pizza posturing then shifted to ingredients — and the corporate chefs preparing them.

“But how about the dough?” Sottiaux wrote back. “Can’t take shortcuts, needs 24 hours at least. And our chef is 🔥.”

“Our chef is so good that God looked down at the food from heaven and said you my most delicious creation,” Musk replied.

“And after having a bite, he wasn’t 100% satisfied and asked our chef to improve upon the SoTA,” Sottiaux said. “Our chef delivered, and created a recipe now universally credited to accelerating the AGI timeline.”

The very real fight behind the pizza posts

The tomato pie-based banter was sweet — but the subtext was spicier.

AI labs are locked in a high-stakes dash for elite engineers, with high-end compensation packages stretching into nine-figure territory.

Companies including Amazon, Microsoft, Meta, OpenAI, and Musk’s xAI are competing for a relatively small pool of researchers capable of building the next generation of models and infrastructure.

Aside from money, two key perks have emerged in the AI talent wars, according to professional AI poacher Mark Zuckerberg: access to GPUs and fewer direct reports.

“People say, ‘I want the fewest number of people reporting to me and the most GPUs,'” Zuckerberg said in a 2025 TITV interview.

At the same time, the broader tech industry has pulled back on many of the pre-pandemic perks amid cost-cutting. Remote work has narrowed, layoffs have gathered steam, and perks like pet care stipends and expansive wellness benefits are becoming less common for new hires.

But there’s one perk that has remained: the fancy lunch spread.

Might as well throw in wood-fired pizza, too.






Sam Altman says OpenClaw creator Peter Steinberger is joining OpenAI to build next-gen personal agents

  • Sam Altman says OpenClaw creator Peter Steinberger is joining OpenAI.
  • OpenClaw is a viral AI agent launched last month.
  • Altman said Steinberger will build “next generation” AI agents at OpenAI.

OpenAI just scored a win in the AI talent wars.

Sam Altman said Sunday on X that Peter Steinberger, the creator of OpenClaw, the viral AI agent powering the agent-only social network Moltbook, is joining OpenAI.

Altman said Steinberger would build the “next generation” of personal AI agents at the company.

“He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people,” Altman said about Steinberger. “We expect this will quickly become core to our product offerings.”

Altman added that OpenClaw, which was briefly known as Clawdbot and then as Moltbot after Anthropic took notice, will live on as an open-source project supported by OpenAI.

“The future is going to be extremely multi-agent and it’s important to us to support open source as part of that,” he wrote.

Steinberger, previously best known for founding the PDF processing company PSPDFKit, came out of retirement to launch OpenClaw in late 2025.

He is likely to bring a new perspective to OpenAI’s race to develop artificial general intelligence. Steinberger has said he believes AGI will work best as specialized forms of intelligence rather than as a single generalized one.

“What can one human being actually achieve? Do you think one human being could make an iPhone or one human being could go to space?” Steinberger said on a Y Combinator podcast in February. “As a group we specialize, as a larger society we specialize even more.”



