
The chart that explains OpenAI’s ChatGPT ads push

Sam Altman and OpenAI are getting serious.

They’re dropping sidequests like Sora, and trying to catch up to rival Anthropic, which has a booming business selling tools to coders.

But OpenAI still wants to make money from people who will never give it a dime. It wants to do that by showing them ads.

And this chart from analysts at MoffettNathanson explains why:


Chart: ChatGPT's user base, divided between free/low-fee users, who may see ads, and paid users, who will not see ads. ChatGPT has lots of users, but only a sliver of them are paying for the service; the rest could see ads. (MoffettNathanson)



It’s a simple argument, but I’ll spell it out here: As of January, OpenAI’s ChatGPT had some 900 million users. But the vast majority of them — 850 million — pay very little or nothing at all to use the service.* So OpenAI wants to turn those low- to no-revenue users into reliable revenue generators by showing them ads.

That’s it. That’s the post.

But, since you are still here: While OpenAI says its barely hatched ad program is already generating results — the company has said it’s on track to generate $100 million a year in revenue, just two months into its ad launch — it still has a very long way to go.

The company is just beginning to build out the team and tech it will need to run a truly meaningful ad business — it just hired a top Meta exec to run sales — and for quite some time, ChatGPT ads are likely to be something ad buyers experiment with, but don’t rely on. Analyst Michael Nathanson says that while the company has been looking to charge advertisers $60 for every 1,000 impressions, it has been settling for something closer to $15 per 1,000 as it gets up and running.
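Those CPM figures imply very different impression volumes. As a back-of-envelope sketch — only the $60 and $15 rates and the $100 million run rate come from the article; the impression counts are derived, not reported:

```python
# Back-of-envelope CPM math. CPM = cost per 1,000 ad impressions.

def revenue(impressions: int, cpm: float) -> float:
    """Revenue in dollars for a given impression count at a CPM rate."""
    return impressions / 1000 * cpm

TARGET = 100_000_000  # the reported $100M annual run rate

# Annual impressions needed to hit that target at each rate:
at_asking = TARGET / 60 * 1000   # ~1.67 billion impressions at $60 CPM
at_actual = TARGET / 15 * 1000   # ~6.67 billion impressions at $15 CPM

print(f"Impressions needed at $60 CPM: {at_asking:,.0f}")
print(f"Impressions needed at $15 CPM: {at_actual:,.0f}")
```

In other words, settling for a quarter of the asking price means OpenAI has to serve roughly four times the impressions to hit the same number.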

*ChatGPT Go, priced at $8 a month, is OpenAI’s cheapest paid service, and subscribers to that tier will see ads, along with free users.






Palantir’s tech head explains how he manages stars — and how he owned a big screwup to the CEO

Palantir’s chief technology officer uses a “Superman” analogy to help manage the company’s brightest talent.

On an episode of the “Invest Like The Best” podcast released on Tuesday, Shyam Sankar shared how he helps employees identify which skills to embrace and which to avoid.

“Superpowers are effortless,” he said. “My analogy for this is that Superman could fly. He could see through walls. But that wasn’t some sort of arduous thing for him to do. It’s just something he could do.”

The Palantir CTO, who has been with the defense tech giant for 20 years, added that the other side of this is identifying your “kryptonite” — in the series, a mineral fatal to Superman.

“It’s not like something you can work on. The only strategy for Superman around kryptonite was to avoid it,” Sankar said.

He added that the company supports employees in uncovering these weaknesses.

“The discovery of kryptonite usually involves you being exposed to it,” he said. “You don’t want to create a culture which is like, you fuck this up, I gotta fire you.”

Palantir culture

On the podcast, Sankar shared that he once made a big mistake, which he took to the company’s CEO, Alex Karp.

“I sheepishly went into Alex and was just completely honest,” he said. “He was also in pain as he internalized what this was going to mean. But he valued the fact that I wouldn’t try to hide it.”

Sankar added that the episode taught him that it was important to have an environment that allows mistakes.

Palantir is known across tech for its anti-hierarchical, untraditional company culture.

According to staffers on the company’s YouTube videos, Palantir is split into micro-teams, and employees report to their teammates. One hiring manager said that, for a project to which a Big Tech company would assign 30 engineers, Palantir only assigns three to four.

The company’s leadership has also embraced ditching diplomas in favor of real-world learning.

On an August earnings call, Karp, who holds a law degree from Stanford and a doctorate from Germany’s Goethe University, said “no one cares” about educational backgrounds at the company.






A Google engineer whose job is changing due to AI explains how she’s learning without burning out

This as-told-to essay is based on a conversation with Pratiksha Patnaik, a 30-year-old cloud infrastructure engineer at Google Cloud Consulting, based in Seattle. Her identity and employment have been verified by Business Insider. The following has been edited for length and clarity.

I’ve been with Google for around three years and I started as an infrastructure engineer. I’m still an infrastructure engineer and on a day-to-day basis, I work with customers to build different solutions, depending on the needs of the customer.

At first, I was mostly involved with networking security and infrastructure customers. But as we saw the AI wave come in, we started focusing more on customers that want to adopt gen AI products and solutions.

I didn’t transition into an AI role, but I’m working with a lot of AI services, and AI engineers who are working on features for those services. My job is a combination of working with customers and the product team, to provide technical solutions for customers. It’s a constant feedback loop to figure out if the solution we’re building is right for the customer we’re working for.

Our job is to know how these products work. Sometimes when we work on the products, we identify feature gaps or bugs, so we need to work with the product team or engineering team.

I’ve been in the same role the whole time, but the nature of my job is changing because of everything going on in the AI space. We get a lot of demand for AI products, and we have to do a lot of trainings to deliver.

I spend an hour or two weekly on trainings

The more AI progresses, the more difficult it’s become to keep up. As the rate of AI innovation gets quicker, the role of engineers has transitioned from mastery to continuous adaptation at scale.

Just being aware of everything that’s happening in the tech industry, along with what we have to do with the customer, has changed dramatically from what it was like a year ago. Back then, we had to execute within known constraints. But as time passes and AI evolves rapidly, those parameters have dissolved, and we have to invest much more time in learning about changes in this space. We now have to navigate an ever-expanding problem space alongside our customers.

I spend around one to two hours a week up-skilling on new concepts. We have a lot of internal trainings that we can utilize. So I see if there is something new that I’m interested in learning about and that can help me do my job.

I am gaining a deeper level of understanding in high-performance computing, AI observability, model performance benchmarking, and the underlying architecture of GPUs and TPUs.

It can get overwhelming

The culture at Google is very much about constantly learning. Every day we learn about a new tool or model version. That motivates me to keep learning. We also have to skill up in order to put our best foot forward in front of the customers.

But with the pace of technology nowadays, I feel like I need to know everything — and if I don’t learn, I might be left behind.

The reality is that it’s not practically possible to know everything with the changes that are coming out at an exponential rate. To remain effective without burning out, I prioritize intentional depth over exhaustive consumption. By focusing on what really interests me, I can make sure that my learning is not just a chore of “keeping up,” but an investment in expertise.

When I read too much, I get overwhelmed and it’s not possible to retain all of the information I’m consuming. We’re at a point where the amount of information we have is huge and we have to figure out where to spend our time and what’s the most beneficial for us.

Are you an engineer experiencing changes in your job? We’d love to hear from you. Reach out to the reporter from a non-work device via email at aaltchek@insider.com or via the secure-messaging platform Signal at aalt.19.






Microsoft manager explains how she pivoted from admin to AI — and doesn’t regret her English degree

This as-told-to essay is based on a conversation with Brit Morenus, a 37-year-old senior AI gamification program manager, based in Charlotte, North Carolina. Her identity and employment have been verified by Business Insider. The following has been edited for length and clarity.

I’ve been at Microsoft for a total of 13 years, but for five and a half, I was a contract worker.

I graduated from college with a degree focused on English, communications, and marketing. I first landed a job at Microsoft as a contract executive assistant. I stayed in that role for about eight months, then joined the marketing team.

Eventually, I had the opportunity to take a really special position, but it required knowing gamification. Gamification is about integrating game mechanics and motivators, such as storytelling and reward systems, into learning. So I was going to teach people about our products and sell them in a gamified way.

I spent about a year getting certifications that taught me about gamification. I upskilled and learned how to create games, what game mechanics are, and what motivates someone when they’re learning.

That was the position where I was able to prove my impact, and they decided to bring me on full-time. I stayed in that role for another six years, training the frontline and customer service support to develop the right sales skills.

Eventually, I had the opportunity to start gamifying learning about AI. They wanted someone with gamification skills, and my certifications and experience made me the ideal candidate.

I didn’t know much about AI yet, aside from using it for personal reasons, but transitioning to an AI role was actually faster than pivoting to gamification. Since I held the gamification role for about six years, I became really good at it. It only took about three months for me to upskill in AI.

In my first three months on the team, I made myself knowledgeable about AI to the point where I could teach others about it. That’s when I got a certification in Azure AI Fundamentals. It was a certification specific to how Microsoft’s AI works.

I helped my entire team get it, and then I helped my entire organization start working on it. Then I helped the greater customer service support organization work toward getting it as well.

Get outside your comfort zone

My advice to those who want to transition would be: Don’t let fear keep you from stepping outside your comfort zone. There’s so much ambiguity about changing roles or companies, but there’s no time like the present.

With AI specifically, you just need to learn. Everyone already uses it, but you need to understand how it works, because that’s how you can understand what to do with it.

It’s also important to upskill yourself. You have to be willing to constantly move and learn more, because it’s going to keep changing — and faster than you can grasp it. Sometimes AI makes wrong predictions, but it is using words to make that prediction. So I absolutely need to use my English degree in order to figure out keywords and how to prompt it to do the right thing.

I don’t regret my English degree

Up until this AI role, I always joked that I wasn’t using my English degree. But now I use it everywhere, and it truly does help. It helps with things like talking to executives and also with the role itself.

It’s important to know the language of AI and how it operates. So now, more than ever, I am using every bit of my English degree and understanding English, grammar, and how it all functions.

For example, there’s a tagging process that happens behind the scenes with AI, just like on social media. Looking at an image, the model might tag it as a woman or a supermarket, and each tag comes with a confidence score that tells you whether it’s relevant and whether it’s what we’re looking for.
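That tagging-and-confidence idea can be sketched in a few lines. The tag names, scores, and the 0.8 threshold here are illustrative assumptions, not details of any actual Microsoft system:

```python
# Minimal sketch of filtering image tags by confidence score.
# Threshold and tags are made up for illustration.

def relevant_tags(tags: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Keep only the tags whose confidence score clears the threshold."""
    return [name for name, score in tags.items() if score >= threshold]

# A model might emit something like this for one image:
image_tags = {"woman": 0.94, "supermarket": 0.88, "bicycle": 0.31}

print(relevant_tags(image_tags))  # → ['woman', 'supermarket']
```

The judgment call — where to set the threshold, and whether a surviving tag is actually what you were looking for — is the language-heavy part she describes.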

A lot of it is more about understanding how to apply the English language than about AI — so, thanks, Mom and Dad, I am using the degree you paid for.

This is part of an ongoing series about workers who transitioned into AI roles. Did you pivot to AI? We want to hear from you. Reach out to the reporter via email at aaltchek@insider.com or secure-messaging platform Signal at aalt.19.






A Google VP explains why ads make sense in AI search but not Gemini — yet

Marketers champing at the bit for AI chatbots to become the next major ad surface may have to suppress their appetites a little longer.

With Google’s Gemini surging in popularity, speculation has been bubbling in the ad industry that the app might be on the cusp of introducing ads to capitalize on the moment — and help offset the hefty AI infrastructure costs.

Not so, according to Google’s VP of global ads, Dan Taylor. In an interview with Business Insider this week, Taylor reaffirmed there are “no plans for ads in the Gemini app.”

Instead, the ads team is prioritizing ad placements within AI search. Google began introducing ads to AI Overviews — the natural language summaries of search results that appear at the top of its search engine results page — in 2024. Last year, it brought ads to AI Mode, its AI chatbot that appears on search pages, which enables users to conduct more in-depth research and ask follow-up questions.

“Search and Gemini are complementary tools with different roles,” Taylor said.

“While they both use AI, search is where you go for information on the web, and Gemini is your AI assistant,” he said. “Search is helping you discover new information, which can include commercial interests like new products or services. We see Gemini as helping you create, analyze, and complete that.”

From an advertising perspective, Google has over 25 years of experience with search ads. Monetizing AI assistants is a relatively new, uncharted territory with numerous questions to consider.

Here are a few:

  • Where and when should an ad show up?
  • What would these ads look like, and how should companies think about charging for them?
  • How can an AI chatbot balance commercial interests while also ensuring users feel they are getting accurate and objective answers?
  • Could the introduction of ads alienate users in a competitive landscape where apps like Gemini are fighting for supremacy against the likes of OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude?

A first-mover disadvantage?

Ads might feel inevitable as tech giants invest billions of dollars into their AI infrastructures. However, AI companies are aware that making the first move could be perceived as a degradation of their products and cause users to jump ship.

Google’s success in leveraging AI to create financial gains from its existing search product and advertising platform is one advantage it has over arch-rival OpenAI, which is under pressure to demonstrate a path to profitability. It potentially gives Google more leeway to wait before introducing an ad model to Gemini.

Stratechery tech analyst Ben Thompson said in a recent interview on the tech news show TBPN that OpenAI delaying ads in ChatGPT “risks the entire company.”

“They could have launched the world’s crappiest ads in 2023. By today, in 2026, they would be good,” Thompson said. “Now, they’re going to have to launch ads, they’re going to suck, and people are going to be like, ‘This sucks, I’ll just go to Gemini.'”

The rivalry between Google and OpenAI intensified late last year when Google released its Gemini 3 AI model, which received rave reviews. OpenAI CEO Sam Altman responded by issuing a “code red,” telling teams to redirect resources from newer projects, including a yet-to-be-released advertising program, to prioritize improving ChatGPT’s performance.

Gemini had 650 million monthly active users, Alphabet, Google’s parent company, said in its latest quarterly earnings report in October. OpenAI said in October that ChatGPT had 800 million weekly users.

What Google has learned from ads in AI search so far

Taylor said that more than 80% of Google’s advertisers are currently using some form of AI-powered search functionality. That’s largely through the adoption of tools like AI Max for Search and Performance Max, where Google’s AI algorithm automatically chooses which ad creatives a campaign should run and where to place those ads.

Advertisers can’t yet specifically choose to run ads within AI Mode or AI Overviews. Instead, the algorithm makes the decision to place them there based on targeting variables like location, demographics, keywords, and topics.

“We don’t have any plans to enable buying separately at this phase,” Taylor said.

Taylor said AI Overviews have notched up more than 2 billion monthly active users, and that people are clicking and engaging with AI Overview ads “at about the same rate” as traditional search ads.

Google’s testing of ads in AI Mode isn’t as far along and presents more challenges when trying to convert the traditional search ads playbook for the AI era. Users have longer back-and-forth conversations in AI Mode, and ads shown too early can feel “intrusive” and create “a trust problem,” Taylor said. A newbie runner seeking helpful information about how to prepare their body for a marathon later in the year might not be ready straight away for ads featuring performance running shoes, for example, he added.

This month, Google said it had begun testing a new ad format called Direct Offers, which will let advertisers present personalized discounts to shoppers who are about to make a purchase within AI Mode. Taylor said Google is only working with a specific set of advertisers on the Direct Offers pilot and didn’t have more information about when it might become broadly available.

Direct Offers was one of several announcements Google made regarding new AI-shopping experiences. New products included a forthcoming checkout function that will let shoppers complete their purchases inside AI Mode and the Gemini app.






Critical Role’s chief creative officer, Matt Mercer, explains how he avoids burnout

Critical Role’s chief creative officer, Matthew Mercer, had been spearheading his eight-member crew’s relentless push into the big leagues of nerdworld for 10 years.

That was until this July, when he announced that he’d be giving up control of one of the crew’s biggest priorities, their long-running “Dungeons & Dragons” Twitch livestream.

In an August appearance on the podcast “Crispy’s Tavern: Tales and Tea,” Mercer said he’d felt the threat of burnout and thought he needed a break. He said he’d started to feel a “continuous need to produce creatively,” which was “a very draining and very scary thing.”

To be sure, Mercer and his seven cofounders still have a full slate of projects to work on. That includes an ongoing sold-out arena tour, as well as two Amazon-backed animated series on Prime Video. Mercer also has a key role in the team’s game publishing arm, Darrington Press, home to “Daggerheart,” their flagship game and their answer to “D&D.”

Still, Mercer says, it’s important to be able to admit when you’re done, and to give yourself permission to step away from the work for as long as you need to.

“My biggest advice for burnout is to acknowledge when you’re at the edge and take every opportunity you can to step away and replenish your cup,” Mercer told Business Insider.

Brennan Lee Mulligan of “Dimension 20” fame, Mercer’s longtime friend and collaborator, is the game master for Campaign Four, the team’s ongoing “D&D” stream. Mulligan taking over the main stream means Mercer is no longer solely in charge of captaining the team’s regular episodes, which often run to the four-hour mark.

“There’s this concept, the idea that just pushing through and sometimes necessity requires you to do that to a certain point,” Mercer said.

“But I find walking away and taking some time to enrich your creative input means that whatever time you lost beating your head against the wall will be more than made up for when you can return from a place of genuine inspiration and renewal,” Mercer added.

Campaign Four airs on Beacon, Critical Role’s in-house streaming platform, as well as on Twitch and YouTube.






Americans are living in a ‘career industrial complex.’ Venture capitalist Bill Gurley explains how to break out and find your dream job.

A top Silicon Valley investor has an antidote for “quiet quitting.”

Bill Gurley is a general partner at the venture capital firm Benchmark and the author of “Runnin’ Down a Dream: How to Thrive in a Career You Actually Love.” Gurley told Neal Freyman and Toby Howell on the “Morning Brew Daily” podcast that aired on Sunday that it is “horrific” how some people are actively disengaged at work, but the heart of the matter is that people “aren’t ending up in the right place.”

“We developed this mindset where you push kids toward economic safety — doctors, lawyers, jobs where unemployment is low, and salaries are high,” said Gurley. “But we’ve pushed a lot of kids into what I call the ‘career industrial complex.'”

Gurley said that the “career industrial complex” means pushing children toward a “résumé arms race” of standardization and credential accumulation, rather than encouraging curiosity and exploration.

A simple test as to whether you would be successful in your dream job, said Gurley, is whether you would be willing to learn on your own time.

“I like to say, you know, if you have three episodes of ‘Breaking Bad’ left, would you study this instead?” said Gurley. “Like, does it compete with what you do in your free time?”

Gurley added that he once did a survey where he asked 10,000 people if they would choose a different career if given the chance to go back in time, and 60% said yes.

Gurley’s comments came as workplace trends such as “job hugging” and “quiet cracking” emerged in 2025, while workers feared layoffs and the prospects of landing new roles dimmed for many young professionals.

A Gallup poll conducted in 2024 found that employee engagement in the US had fallen to its lowest level in a decade, with only 31% of employees feeling engaged. Workers under the age of 35 were less engaged than older age groups.






A Nobel Prize-winning physicist explains how to use AI without letting it replace your thinking

Think AI makes you smarter?

Probably not, according to Saul Perlmutter, a Nobel Prize-winning physicist credited with discovering that the universe’s expansion is accelerating.

He said AI’s biggest danger is psychological: it can give people the illusion they understand something when they don’t, weakening judgment just as the technology becomes more embedded in our daily work and learning.

“The tricky thing about AI is that it can give the impression that you’ve actually learned the basics before you really have,” Perlmutter said on a podcast episode with Nicolai Tangen, CEO of Norges Bank Investment Management, on Wednesday.

“There’s a little danger that students may find themselves just relying on it a little bit too soon before they know how to do the intellectual work themselves,” he added.

Rather than rejecting AI outright, Perlmutter said the answer is to treat it as a tool — one that supports thinking instead of doing it for you.

Use AI as a tool — not a substitute

Perlmutter said that AI can be powerful — but only if users already know how to think critically.

“The positive is that when you know all these different tools and approaches to how to think about a problem, AI can often help you find the bit of information that you need,” he said.

At UC Berkeley, where Perlmutter teaches, he and his colleagues developed a critical-thinking course centered on scientific reasoning, including probabilistic thinking, error-checking, skepticism, and structured disagreement, taught through games, exercises, and discussion designed to make those habits automatic in everyday decisions.

“I’m asking the students to think very hard about how would you use AI to make it easier to actually operationalize this concept — to really use it in your day-to-day life,” he said.

The confidence problem

One of Perlmutter’s concerns is that AI often speaks with far more certainty than it deserves and can be “overly confident” in what it says.

The challenge, Perlmutter said, is that AI’s confident tone can short-circuit skepticism, making people more likely to accept its answers at face value rather than question whether they’re correct.

That confidence, he said, mirrors one of the most dangerous human cognitive biases: trusting information that appears authoritative or confirms our existing beliefs.

To counter that instinct, Perlmutter said people should evaluate AI outputs the same way they would any human claim — weighing credibility, uncertainty, and the possibility of error rather than accepting answers at face value.

Learning to catch when you’re being fooled

In science, Perlmutter said, researchers assume they are making mistakes and build systems to catch them. For example, scientists hide their results from themselves, he said, until they’ve exhaustively checked for errors, thereby reducing confirmation bias.

The same mindset applies to AI, he added.

“Many of [these concepts] are just tools for thinking about where are we getting fooled,” he said. “We can be fooling ourselves, the AI could be fooling itself, and then could fool us.”

That’s why AI literacy also involves knowing when not to trust the output, he said — and being comfortable with uncertainty, rather than treating AI outputs as absolute truth.

Still, Perlmutter is clear that this isn’t a problem with a permanent solution.

“AI will be changing,” he said, “and we’ll have to keep asking ourselves: is it helping us, or are we getting fooled more often? Are we letting ourselves get fooled?”



