They’re dropping sidequests like Sora, and trying to catch up to rival Anthropic, which has a booming business selling tools to coders.
But OpenAI still wants to make money from people who will never give it a dime. It wants to do that by showing them ads.
And this chart from analysts at MoffettNathanson explains why:
ChatGPT has lots of users, but only a sliver of them are paying for the service. The rest could see ads.
It’s a simple argument, but I’ll spell it out here: As of January, OpenAI’s ChatGPT had some 900 million users. But the vast majority of them — 850 million — pay very little or nothing at all to use the service. So OpenAI wants to turn those low-to-no-revenue users into reliable revenue generators by showing them ads.
That’s it. That’s the post.
But, since you are still here: While OpenAI says its barely hatched ad program is already generating results — the company has said it’s on track to generate $100 million a year in revenue, just two months into its ad launch — it still has a very long way to go.
The company is just beginning to build out the team and tech it will need to run a truly meaningful ad business — it just hired a top Meta exec to run sales — and for quite some time, ChatGPT ads are likely to be something ad buyers experiment with, but don’t rely on. Analyst Michael Nathanson says that while the company has been looking to charge advertisers $60 for every 1,000 impressions, it has been settling for something closer to $15 per 1,000 as it gets up and running.
Many other OpenAI staffers have also publicly criticized the company’s Pentagon deal.
“i personally don’t think this deal was worth it,” Aidan McLaughlin, a research scientist at OpenAI, wrote on X.
Another employee told CNN that many of them “really respect” Anthropic for refusing the Pentagon’s deal.
Clive Chan, a technical staffer, wrote in an X post that he believed OpenAI’s contract barred the use of its models for mass weapons or mass domestic surveillance. Chan wrote that he’s advocating for the company to share more information.
“If we later learn this is not the case, then I will advocate internally to terminate the contract,” Chan wrote.
Even before the deal, nearly 900 former and current OpenAI and Google staffers signed a joint petition supporting Anthropic, one of their primary competitors, and opposing the use of their companies’ technology for mass surveillance and for weapons that can kill without human oversight.
“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the petition said.
Caitlin Kalinowski, a hardware executive who joined OpenAI from Meta in 2024 and leads its robotics division, said she is resigning from the company.
In a post on X on Saturday, Kalinowski criticized OpenAI’s recent deal with the Pentagon.
“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote.
She called her resignation a matter of principle, and said she still deeply respects OpenAI CEO Sam Altman and the team and is proud of their robotics work.
A spokesperson for OpenAI confirmed Kalinowski’s resignation and defended its deal with the Defense Department.
“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the spokesperson told Business Insider. “We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society, and communities around the world.”
OpenAI struck a deal with the Pentagon last week, allowing the Defense Department to use its AI products. The agreement came after its rival Anthropic refused a similar deal over concerns that the technology would be used for mass surveillance and autonomous weapons.
Anthropic has since been effectively blacklisted in Washington. President Donald Trump described the company as “radical woke” in a Truth Social post and demanded federal agencies stop using Anthropic’s technology. Secretary of Defense Pete Hegseth then designated Anthropic a supply-chain risk to national security and said Defense Department contractors would be barred from working with the company.
OpenAI’s decision to strike a deal with the Pentagon caused an immediate backlash. Some users ditched ChatGPT in protest. Anthropic’s chatbot, Claude, is now the No. 1 free app on the Apple App Store, unseating OpenAI’s ChatGPT. Claude’s US downloads increased 240% month over month in February.
Kalinowski’s exit is a setback for OpenAI’s robotics ambitions.
Over the last year, the company has quietly built a San Francisco lab that employs about 100 data collectors. Teams are training a robotic arm to do household chores as part of a broader push to build a humanoid robot. The company told employees in December it also plans to open a second lab in Richmond, California.
A source with knowledge of OpenAI’s plans also previously told Business Insider that the company is exploring several early-stage hardware initiatives — including robotics — but none are considered central to its core mission at this point.
Some people are angry with OpenAI, and it’s about more than just the company’s deal with the Pentagon.
On Tuesday evening, I visited OpenAI’s headquarters in Mission Bay, San Francisco, where I was met by a relatively small but energetic and diverse group of protesters, each with very different demands. The protest was part of the nascent QuitGPT movement; between 40 and 50 people attended, holding signs and chalking hundreds of slogans on the sidewalk.
OpenAI triggered widespread backlash when it signed a contract with the Pentagon on Friday, hours after President Donald Trump ordered federal agencies to halt use of Anthropic’s Claude. The negotiation between the Pentagon and Anthropic had broken down because the OpenAI rival sought contractual guarantees against mass surveillance and fully autonomous weapons, its CEO said in a statement.
The backlash against OpenAI sparked a wave of support for Anthropic, including from Katy Perry, who publicly endorsed Claude and subscribed to it. Calls to abandon ChatGPT in favor of Claude spread rapidly across social media, and the momentum showed up in the download charts: Claude shot to No. 1 in the App Store on February 28, up from sixth place.
Aside from OpenAI’s deal with the Pentagon, protesters have a laundry list of other grievances, including how resource-intensive data centers are and AI’s erosion of human creativity.
Manuel Orbegozo for BI
OpenAI CEO Sam Altman posted an internal memo to X on Monday and said the company is revising its contract with the Pentagon to add explicit protections, including a prohibition on surveilling US persons and nationals and a bar on use by intelligence agencies such as the NSA without a separate contract modification. Altman also acknowledged the rushed rollout in his memo, admitting the company “got things wrong” and that the deal “looked opportunistic and sloppy.” The Pentagon did not respond to Business Insider’s questions about the amended deal.
I talked to six protesters at Tuesday’s demonstration. They were skeptical of the revised deal, but they also voiced broader concerns about AI’s rapid rise and the tech industry.
Many attendees were there for climate concerns
Many protesters said they were there over concerns that data centers will exacerbate the climate crisis.
Wearing a shirt that read “we have a right to good jobs and a livable future,” Perrin Milliken told me that she has long been a climate advocate and came to oppose data centers, which she said put the needs of AI ahead of human needs.
“AI is taking water from communities, polluting communities, and it is also increasing communities’ electricity bills,” Milliken said.
“They’re not even paying for it — we are,” Milliken added of tech companies.
“I want water to drink, not AI to think,” reads a sign held up by a protester.
Tech companies are becoming symbols of wealth inequality
Many protest signs target wealth inequality and call tech billionaires “oligarchs.”
Sarah Guo, who took to the stage to speak, expressed disapproval of billionaires and the resources they consume.
“Sam Altman lives in a super villain’s mansion here in San Francisco,” Guo told the crowd, which immediately booed. “In a city that struggles with affordable housing, his sprawling compound features an underground garage to house luxury cars, an art gallery, and a stand-alone spa cottage, and occupies an entire city block.”
“Sam and his billionaire buddies helped Trump with his disastrous budget bills that stole trillions of dollars from everyday Americans just to line their pockets,” Guo added.
Behind Guo, signs calling the tech industry “big trouble for humanity” and billionaire CEOs “oligarchs” stood tall.
Some are rejecting AI entirely on principle
Meghan Matson said she refuses to participate in using AI and has always felt like AI is “bad news.”
When I spoke to Meghan Matson, she told me that she has completely rejected using AI and felt like it was “bad news” from the start.
“I know that AI is participating with me, but I’m not participating with AI,” said Matson.
“As soon as I saw it start showing up in visuals and imagery, I could see exactly where it heads,” Matson added. “It destroys journalism, it destroys art, it destroys the expression of our common humanity.”
“Stop AI stealing art, writing, electricity, water, jobs,” reads a large chalk message on the street in front of the OpenAI office.
At least one participant was a tech worker unhappy with how their work is used
The 26-year-old, who works in the tech industry, loves AI but doesn’t approve of OpenAI’s Pentagon deal.
“I’m an active AI user. I love AI, and I use it every day, to write, to program, to learn,” said a 26-year-old tech worker who declined to be named.
“What I don’t want is for the technologies that my friends and I build to be used to undermine the freedom we value,” he added.
He told me he had made the robot mask the day before with a cardboard box, black duct tape, and LED lights.
“I spent $12 on this,” he said of his robot mask. “I bet a lot more people are gonna pay attention to this than OpenAI’s next million-dollar ad.”
Sam Altman hopped onto X on Saturday night and told users to ask him anything about OpenAI’s agreement with the Pentagon.
Altman, late on Friday, announced that his company had finalized a deal with the Department of War to use its AI models. OpenAI’s deal came after Anthropic refused an ultimatum regarding the terms of use of its frontier model, Claude, for deployment in mass domestic surveillance and fully autonomous weapons.
Here are five big takeaways from Altman’s AMA.
The OpenAI-Pentagon deal was ‘rushed,’ and Altman knows the ‘optics’ don’t look good
The Pentagon deal was done quickly in “an attempt to de-escalate the situation,” Altman wrote on X.
He added in a separate post that the deal had been “rushed.”
Still, the “optics don’t look good” for OpenAI, he wrote.
“If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” he wrote.
“If not, we will continue to be characterized as rushed and uncareful,” he wrote.
Altman added that he sees “promising signs” for where this will all land for OpenAI.
OpenAI took the Pentagon deal because it ‘got comfortable’ with the ‘contract language’
Altman was asked why the Department of War went with OpenAI over Anthropic. He said he wouldn’t speak for his competitor, but did speculate on why OpenAI got the contract inked first.
“First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one,” Altman wrote. “I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.”
He added that OpenAI and the Department of War “got comfortable with the contractual language” as well.
“I think Anthropic may have wanted more operational control than we did,” he added.
OpenAI has 3 redlines, but it’s open to changing them as tech evolves
Altman said that OpenAI has “three redlines.” But those redlines could change, and more could be added, as the technology evolves and “new risks” come into play.
“But a really important point: we are not elected. We have a democratic process where we do elect our leaders,” Altman wrote. “We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn’t ethical in the most important areas.”
“Seems fine for us to decide how ChatGPT should respond to a controversial question,” he added. “But I really don’t want us to decide what to do if a nuke is coming towards the US.”
Altman says Anthropic is on a ‘dangerous’ path
Altman said OpenAI had been talking to the Department of War for “many months” about non-classified work, before “things shifted into high gear on the classified side.”
“We found the DoW to be flexible on what we needed, and we want to support them in their very important mission,” he wrote.
“I think the current path things are on is dangerous for Anthropic, healthy competition, and the US,” Altman wrote on X as well. “We negotiated to make sure similar terms would be offered to all other AI labs.”
He also asked for “some empathy” for the Department of War, given its “extremely important mission.”
And, in Altman’s words:
Our industry tells them “The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.”
And then we say
“But we won’t help you, and we think you are kind of evil.”
I don’t think I’d react great in that situation.
I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.
Altman says AI can help counter big security threats on two fronts
Altman says AI could come in useful on two fronts. Firstly, the US’s “ability to defend against major cyber attacks,” particularly an attack that might take down the country’s electrical grid.
Secondly, biosecurity is an area where AI could help.
“I do not think we are currently set up well enough to detect and respond to a novel pandemic threat,” Altman said.
Sure, ChatGPT could help a board member write up a memo ahead of a meeting. But OpenAI’s chairman says there’s value to going old-school.
Bret Taylor, OpenAI’s board chair, said in a recent appearance on the “Uncapped with Jack Altman” podcast that he prefers concise but detailed written documents from board members over slide presentations. And he doesn’t want them relying on AI.
“I really like written documents for boards over presentations,” Taylor said. “You end up letting people synthesize information ahead of the board meeting, so you end up with more substantive discussions in the board room.”
Taylor, the former co-CEO of Salesforce and cofounder of AI startup Sierra, said that writing without AI is a worthwhile thinking exercise and helps board members clarify their thoughts.
His expectation for the boards he runs is that members have read the written material ahead of time, which helps keep things focused and substantive during the actual meeting.
“The main thing is it’s been read — and it’s been read ahead of time,” he said. “You end up with a meeting about the actual meat and potatoes of the topics, and you’re not staring at a bunch of sales numbers for the first time.”
Amazon founder Jeff Bezos is famously a big fan of meetings focused on a single memo prepared ahead of time, but while Bezos preferred dense, six-page memos, Taylor specifically favors concise material, arguing that brevity is a sign of careful thought and of respect to stakeholders.
“It’s like what’s that famous line — if I had more time, I would have written a shorter letter,” he added. “Like, spend the time because that’s actually how you can show respect to your stakeholders that you’re thinking about the strategic issues going on in your business.”
And while Taylor might not be a fan of leaning on AI for board meeting prep, that doesn’t mean he is dismissing the technology’s potential to be valuable in high-stakes situations.
“If you want a hot take, I think my intuition is regulators will start asking for agents,” he said. “The idea that you have a human set of controls over a regulated process will start to feel like a risk, rather than the risk being AI.”
It’s been said that the way to a person’s heart is through their stomach. It sounds like Meta CEO Mark Zuckerberg wanted to see if the AI talent war, or at least one skirmish, could be won the same way.
Mark Chen, chief research officer at OpenAI, recently said that Zuckerberg personally delivered homemade soup to an OpenAI employee as part of a campaign to recruit the unnamed worker to Meta.
“It’s been kind of interesting and fun to see it escalate over time. You know, some interesting stories here are Zuck actually went and hand-delivered soup to people that he was trying to recruit from us,” Chen told Ashlee Vance on the author’s “Core Memory” podcast.
Chen said Zuckerberg’s move was “shocking to me at the time” but since then, he said he’s returned the favor.
“I’ve also delivered soup to people we’ve been recruiting from Meta,” Chen said, laughing.
The poaching efforts focused on OpenAI’s researchers and engineers underscore the company’s position in the AI race, Chen said.
“We’re always under attack,” Chen told Vance. “This is how I know we’re in the lead, right? Any company starts, where do they try to recruit from? It’s OpenAI. They want the expertise, they want our vision, our philosophy of the world. And we’ve made so many star researchers, right? I think OpenAI, more than anywhere else, has been a place that makes names in AI today.”
Arguably, no other rival tech company has been as aggressive in the so-called AI talent wars against OpenAI as Zuckerberg’s Meta.
In June, OpenAI CEO Sam Altman said that Meta tried to lure some of his engineers with $100 million signing bonuses. The CEO said at the time that none of his top talent was poached, but ChatGPT co-creator Shengjia Zhao later joined Meta’s Superintelligence Lab.
Chen said that Meta unsuccessfully tried to recruit “half” of his direct reports, but that OpenAI has been “fairly good” at retaining top talent. A Meta spokesperson declined to comment.
Top AI researchers have become a hot commodity in the AI race, as it’s generally believed that there is a relatively small number of researchers and engineers capable of achieving breakthroughs or building new LLMs from the ground up.
“It’s like looking for LeBron James,” Databricks’ vice president of AI, Naveen Rao, told The Verge’s Command Line newsletter last year. “There are just not very many humans who are capable of that.”