Google’s $32 billion Wiz deal has officially closed.
The search giant said last year it would buy the cybersecurity firm to bolster its cloud business.
It’s Google’s biggest-ever acquisition.
Google’s $32 billion acquisition of cybersecurity firm Wiz has officially closed.
The search giant announced on Wednesday that Wiz will join Google Cloud at a moment when AI is making cloud security more vital. Wiz offers a platform that helps customers protect data across different cloud services.
“In today’s AI era, more businesses and governments are migrating their most important data and systems to the cloud and turning to agile and continuous software development,” Google wrote in a post announcing the news.
“As these organizations operate in a multicloud environment and adopt AI, attackers are using AI to operate with greater speed and sophistication,” the company added.
Google announced last year that it intended to buy Wiz for $32 billion in the search giant’s biggest-ever acquisition.
The deal, which was also viewed as a test for President Donald Trump’s antitrust agenda, could be a boon for Google’s cloud business as it pursues customers for its AI products.
Google said Wiz would remain a multicloud offering after the acquisition, meaning it will continue to be made available through rival cloud providers such as Amazon and Microsoft.
“Our mission remains as bold as ever: to protect everything organizations build and run,” Assaf Rappaport, the CEO of Wiz, said in a blog post. “And we are still just getting started.”
Throughout university, Google was always my dream job. I watched “The Internship” and dreamt of the day that I would get to work there.
Eventually, after a few years, I made my way in and landed a role at Google as a global product lead. Prior to that, I was at Meta as the business operations and planning lead for North America. Now I’ve left Big Tech to build multiple businesses, including one that is doing seven figures a year, and invest in over 20 companies.
None of that was a straight line. The first time I interviewed at Google, I was one of three finalists after a 12-month process, but I walked out knowing I had lost the moment I opened my mouth.
I bombed the final round so badly that it took three years before I interviewed there again. Here’s what happened, and the four lessons I’ve carried into every interview and career decision since.
I interviewed for a sales account manager role on Google’s ad team
I had never interviewed at a Big Tech company before, and the process was unlike anything I’d experienced.
I spent months preparing. I did hundreds of practice interviews. I was still so painfully nervous I could feel my hands shaking in the waiting room.
Somehow, I made it to the final interview, where the interviewer asked the most deceptively simple question imaginable: “What do you do for fun outside of work?”
I froze.
Here’s the truth: I was a nerd. A genuine, unashamed nerd who spent his evenings building websites, obsessively testing productivity tools, and writing about everything he learned. I had started a tiny tech newsletter I shared with a handful of friends (and my mom).
I had this image in my head of what a “Google person” looked like. Cool hobbies. Cool parties. Cool music. I was convinced that if I told her who I really was, she’d disqualify me on the spot.
So I lied.
I tried to change my personality for the job, and it didn’t work
I told the interviewer, “I go to a lot of parties and music festivals and watch a lot of TV.”
The color drained from her face. I could feel it happening in real time. She followed up: “What kind of music festivals? What was the best show you watched?”
I doubled down.
“I like Drake and Taylor Swift. And I’ve basically watched every show on Netflix. Literally every single one.”
I never got a callback. She thanked me for my time, said they decided to go with another candidate, and that was that. It took three years before Google interviewed me again.
What I’ve learned since
A year after that failure, I interviewed at another tech company — a better role, higher pay, and in my dream city. This time, I told them everything.
I talked about the tech newsletter I was building. I walked them through the websites I’d built for fun. I rambled about my obsession with productivity software in a way that — in hindsight — must have seemed slightly unhinged.
I got the job. Honestly, it changed my life.
Those two experiences taught me four things I now share with every early-career professional I coach.
The lessons Yeung learned earlier in his career have helped him become a successful entrepreneur.
Courtesy of Andrew Yeung
1. The version of yourself you perform in an interview has to survive the job
Here’s the practical problem with lying in an interview: if it works, you’ve created a prison for yourself.
If I had gotten that Google role by pretending to love music festivals and Taylor Swift, I would’ve had to sustain that fiction with a manager I saw every single day. The relationship starts on a false foundation. The version of you that got hired isn’t the version that shows up on Monday morning.
When you’re authentic in an interview, you’re not just trying to impress them — you’re also evaluating whether this is a place where the real version of you can actually thrive. That calculus matters.
2. Generic answers are a death sentence
“I like Drake and Taylor Swift” is the résumé equivalent of “I’m a hard worker who loves a challenge.” It says nothing. It connects with no one. It helps no one.
“I run a tech newsletter about productivity tools — the readers are mostly friends, plus my mom” is memorable, specific, and real. Even if the hiring manager has zero interest in productivity software, they now have a picture of who you are.
The goal isn’t to guarantee they’ll love your hobbies. The goal is to give them something real to react to.
3. Your niche obsessions are your competitive advantage
At the time, I was embarrassed by my interests. I thought they made me less hireable.
The opposite turned out to be true.
My obsessions with building things, writing about what I learned, and exploring new tools were exactly the signals a company like Google was looking for: someone who builds things in their spare time because they simply can’t help it.
Ask yourself: What are the interests you’re most tempted to hide in an interview? Chances are, those are exactly the ones that make you most distinctive.
4. Culture fit is a two-way interview
After failing that Google interview, I was devastated. I spent months replaying every answer.
What I understand now is that culture fit isn’t just something that happens to you — it’s something you also get to assess. A company that would pass on me for being a nerd who builds websites and runs newsletters was probably not a company where I would have thrived.
The right fit wants the real you. If it doesn’t, you’ve learned something valuable — for free, before you’re three years into the wrong job.
Andrew Yeung is a former Meta and Google employee who now throws tech parties through Andrew’s Mixers, runs a tech events company called Fibe, and invests at Next Wave NYC.
This as-told-to essay is based on a conversation with Pratiksha Patnaik, a 30-year-old cloud infrastructure engineer at Google Cloud Consulting, based in Seattle. Her identity and employment have been verified by Business Insider. The following has been edited for length and clarity.
I’ve been with Google for around three years, and I started as an infrastructure engineer. I’m still in that role, and on a day-to-day basis, I work with customers to build different solutions, depending on the needs of the customer.
At first, I was mostly involved with networking security and infrastructure customers. But as we saw the AI wave come in, we started focusing more on customers that want to adopt gen AI products and solutions.
I didn’t transition into an AI role, but I’m working with a lot of AI services, and with AI engineers who are building features for those services. My job is a combination of working with customers and the product team to provide technical solutions. It’s a constant feedback loop to figure out whether the solution we’re building is right for the customer we’re working with.
Our job is to know how these products work. Sometimes when we work on the products, we identify feature gaps or bugs, so we need to work with the product team or engineering team.
I’ve been in the same role the whole time, but the nature of my job is changing because of everything going on in the AI space. We get a lot of demand for AI products, and we have to do a lot of training to deliver.
I spend an hour or two weekly on training
The more AI progresses, the more difficult it’s become to keep up. As the rate of AI innovation gets quicker, the role of engineers has transitioned from mastery to continuous adaptation at scale.
Just being aware of everything that’s happening in the tech industry, along with what we have to do with the customer, has changed dramatically from what it was like a year ago. Back then, we had to execute within known constraints. But as time passes and AI evolves rapidly, those parameters have dissolved, and we have to invest much more time in learning about changes in this space. We now have to navigate an ever-expanding problem space alongside our customers.
I spend around one to two hours a week upskilling on new concepts. We have a lot of internal training courses that we can use. So I see if there is something new that I’m interested in learning about and that can help me do my job.
I am gaining a deeper level of understanding in high-performance computing, AI observability, model performance benchmarking, and the underlying architecture of GPUs and TPUs.
It can get overwhelming
The culture at Google is very much about constantly learning. Every day we learn about a new tool or model version. That motivates me to keep learning. We also have to skill up in order to put our best foot forward in front of the customers.
But with the pace of technology nowadays, I feel like I need to know everything — and if I don’t learn, I might be left behind.
The reality is that it’s not practically possible to know everything with the changes that are coming out at an exponential rate. To remain effective without burning out, I prioritize intentional depth over exhaustive consumption. By focusing on what really interests me, I can make sure that my learning is not just a chore of “keeping up,” but an investment in expertise.
When I read too much, I get overwhelmed and it’s not possible to retain all of the information I’m consuming. We’re at a point where the amount of information we have is huge and we have to figure out where to spend our time and what’s the most beneficial for us.
Google executive Yasmeen Ahmad is looking for something specific when hiring engineers — and it’s not just technical know-how.
Ahmad told Business Insider that the typical software engineering interview used to focus on detailed coding tests and test suites. Now, as she hires for a forward-deployed engineering team, which will work with customers, she said she’s prioritizing people with fresh ideas.
The strongest candidates are “able to think outside the box,” Ahmad, director of Google Cloud’s data cloud, said. “They’re able to think outside the frame of how we would have normally described a problem.”
The executive added that candidates who take a traditional approach to engineering aren’t performing as well in her team’s interviews. The ideal candidate nowadays, she said, can demonstrate creative problem-solving by using AI to reimagine traditional processes. She said she evaluates that type of thinking in two ways:
1. Constant experimentation
Ahmad said she looks for candidates who are constantly “tinkering” with new tools. That gives her an immediate signal that they’re creative thinkers.
“When you’re interviewing them, they’re naturally immediately talking about, ‘oh, last week I had tried AI in this context, and this is how it made me better at doing my job in this way,'” Ahmad said.
These candidates aren’t trying new tools because their boss told them to or because it’s the new cool thing to try, she said.
“They’re the early adopters,” Ahmad said.
Tech executives have told Business Insider that side projects are becoming increasingly common for candidates to demonstrate their aptitude in interviews. However, Ahmad said candidates don’t need to have a GitHub repository of projects they’ve worked on in their spare time.
“It doesn’t have to be pet side projects, because people are busy,” Ahmad said, adding that workers can experiment on the job by trying out new ways to speed up their work.
2. Scenario testing
AI is being used more often throughout the interview process — in some cases, illicitly by job seekers, and in others, as a way for employers to test candidates’ AI capabilities. As these tools reshape hiring, Ahmad said scenario-based testing has become a central component of the interview process, giving hiring managers a better way to assess creativity.
Ahmad said she’ll ask candidates how they would approach a scenario involving AI tools in an industry where they have no domain knowledge.
For example, if the scenario involved healthcare, a traditional candidate might say that they would take all the patients’ unstructured PDFs, feed them into a single LLM prompt, and ask it to generate a summary for the doctor. That would be a “massive liability,” Ahmad said, because in that scenario the candidate assumes AI can inherently understand the timeline of events or the clinical context of an image just by looking at it.
Ahmad said she’s looking for a candidate who can “find solutions in a way that breaks the chains of how that workflow process has traditionally gone.” So someone might suggest building the semantic context for the imaging data before the model sees it. Next, they would build a specific framework to ensure the agent is operating in the right time frame of data. Then, they would recommend designing a multi-step process that includes a continuous evaluation loop.
“We aren’t just hiring people to write prompts,” Ahmad said. “We are hiring people who can foresee how a model might silently fail in a high-stakes environment, and who know how to build the automated evaluation loops to catch it before it does.”
She said asking these sorts of questions to vet creativity is especially useful as AI transforms the software engineering industry by automating core parts of the job.
“We’re seeing the human role is evolving to more of an orchestrated role,” Ahmad said. “So rather than having to write all of the detailed code, it’s ‘how do I actually express my intent to a multi-agent system now and have that multi-agent system execute on that intent?'”
Google apologized on Tuesday for a news alert about a controversial moment at the British Academy Film Awards that contained the unedited N-word.
“We’re deeply sorry for this mistake,” a Google spokesperson said in a statement. “We’ve removed the offensive notification and are working to prevent this from happening again.”
The now-deleted news alert previewed a story about Sunday’s BAFTA awards, where an attendee with Tourette syndrome shouted the N-word while “Sinners” stars Michael B. Jordan and Delroy Lindo — both of whom are Black — were on stage to present an award.
Deadline.com initially reported that AI was to blame for the racial slur appearing in the push alert. Google said that was not the case, and Deadline has since clarified its report.
Google said it caught the mistake quickly and only a “small subset of users” received the alert with the unedited racial slur. The search giant said that its push alert systems recognized a euphemism for the slur used in stories and incorrectly inserted the full word.
“This system error did not involve AI,” Google said. “Our safety filters did not properly trigger, which is what caused this.”
Tourette syndrome advocate John Davidson, whose life story served as the inspiration for the BAFTA-nominated film “I Swear,” later said in a statement that he was “deeply mortified if anyone considers my involuntary tics to be intentional or to carry any meaning.”
According to the Tourette Association of America, roughly 10% of the millions of people living with Tourette and tic disorders experience coprolalia, which is “the involuntary vocalization of obscene or socially inappropriate words or phrases.”
“Importantly, these vocal tics are not reflective of the beliefs or values of the person experiencing them,” the association said in a statement.
The BBC and the BAFTAs have faced intense criticism for broadcasting the moment, even though the award ceremony was subject to a two-hour tape delay. On Monday, both the BBC and BAFTAs offered separate public apologies for the moment.
“We take full responsibility for putting our guests in a very difficult situation and we apologize to all,” the BAFTAs said in a statement. “We will learn from this, and keep inclusion at the core of all we do, maintaining our belief in film and storytelling as a critical conduit for compassion and empathy.”
Kate Phillips, the BBC’s chief content officer, said in a note to staff that another racial slur was edited out of the broadcast.
“We take full responsibility for what happened,” Phillips wrote on Tuesday in the note, which was provided to Business Insider. “When I was made aware it was audible on iPlayer, I asked for it to be taken down. As I’m sure you’re aware we put out a statement yesterday morning apologising that the remark was not edited out prior to broadcast.”
During the award ceremony, host Alan Cumming made multiple statements about the language the audience might be hearing. Variety reported that “shut the fuck up” among other phrases could also be heard during the show.
“You may have noticed some strong language in the background,” Cumming told the audience. “This can be part of how Tourette’s syndrome shows up for some people as the film explores that experience.”
In the search for Nancy Guthrie, authorities have relied not only on traditional investigative work but also on data trails tied to two of the world’s largest companies.
Google and Walmart have both emerged as significant players in the high-profile investigation, assisting local Arizona law enforcement and the FBI as they work to locate the 84-year-old mother of “Today” show host Savannah Guthrie.
Authorities believe that Nancy Guthrie was abducted from her ranch-style home in the Catalina Foothills, just outside Tucson, AZ, nearly three weeks ago.
A major break in the case came more than a week into the elderly woman’s mysterious disappearance, thanks, in part, to the help of Google.
Initially, authorities said they were unable to retrieve any footage from Nancy Guthrie’s Google-owned Nest doorbell camera because she did not have a subscription to store her video feed.
That changed when investigators, working with “private sector partners,” managed to recover some doorbell footage from “residual data located in backend systems,” FBI Director Kash Patel said in a previous statement on X.
Police believe Nancy Guthrie was taken from her home on February 1.
Rebecca Noble/REUTERS
The footage, released widely to the public by the FBI on February 10, revealed a masked and armed man outside Nancy Guthrie’s home appearing to tamper with the doorbell camera on February 1, the day she vanished.
It took Google engineers several days to recover the footage, CNN has reported, citing a person familiar with the investigation. Google did not respond to Business Insider’s request for comment.
The tech giant is attempting to obtain additional video from Nancy Guthrie’s other home cameras, Pima County Sheriff Chris Nanos told NewsNation in a report published on Wednesday.
“We’ve asked Google, ‘Hey guys, can you do this?’ and they said the very same thing, ‘Sheriff, we don’t think we can get anything, but we’ll try,’” Nanos said, adding that investigators remain “hopeful.”
Meanwhile, authorities believe the backpack the suspect wore in the doorbell camera footage was a 25-liter “Ozark Trail Hiker” backpack sold exclusively at Walmart.
A spokeswoman for the Pima County Sheriff’s Department said this week that investigators are working with Walmart management to “identify and isolate the individual who purchased the backpack.”
In an interview with CBS News, Nanos described the backpack as “one of the most promising leads” in the case.
The sheriff said investigators have been scouring surveillance footage from local Walmart stores and that the megaretailer has turned over records of all Ozark Trail Hiker backpack purchases from the last several months, the news outlet reported.
A Walmart spokesman declined to comment on the matter.
The FBI released this image of a suspect in the disappearance of Nancy Guthrie.
FBI
So far, the only item that has been “positively identified” on the suspect in the doorbell camera footage is the Ozark Trail Hiker backpack, a Pima County Sheriff’s Department spokeswoman said.
“Investigators are working to determine where the other items may have been purchased,” the spokeswoman said.
Nancy Guthrie’s disappearance has gripped the nation. Her famous daughter, Savannah Guthrie, has issued tearful video messages, pleading for her mother’s safe return.
Authorities have not publicly identified any suspects or persons of interest in the case.
DNA found at Nancy Guthrie’s property is being analyzed by investigators, the sheriff’s department said this week.
Earlier this week, Nanos, the sheriff, said the Guthrie family, including all siblings and spouses, has been cleared as possible suspects in the case.
“The family has been nothing but cooperative and gracious and are victims in this case,” Nanos said.
Bolt Data and Energy, a data center development firm that was cofounded late last year by former Google CEO Eric Schmidt, is negotiating a deal that would allow it to begin construction on a large data center project it is planning in West Texas.
Schmidt’s firm is in discussions with Google, his former employer, according to two people with direct knowledge of the talks. The tech giant, one of the leaders in the race to develop and commercialize artificial intelligence, is considering a commitment of 250 megawatts, according to one of the people. The other person said it was too early to characterize the exact size of the potential transaction because it was still under discussion.
The sources spoke on the condition of anonymity because the potential transaction is still being arranged and the talks are confidential.
“We don’t comment on rumors,” a Google spokeswoman told Business Insider, declining to comment further. Google announced last year that it plans to build $40 billion of cloud and AI infrastructure in Texas by 2027.
The potential deal highlights how Big Tech is racing to secure the power, physical infrastructure, and land needed to fuel AI, even as the costs and financial risks of those bets loom.
In December, Bolt completed its first funding round, raising $150 million from investors, including $50 million from Texas Pacific Land Corporation, a public company that owns large tracts of land in West Texas. As part of the investment from TPL, Bolt will develop data centers on land in TPL’s portfolio.
A presentation detailing Bolt’s development plans, shared with Business Insider, said that TPL’s land would give it access to abundant power and water for cooling. These commodities have become increasingly strained as data center development has boomed around the country.
The presentation states that Bolt’s development would begin with an “initial 250 megawatt facility” and expand in 250-500 megawatt increments into a 5 gigawatt campus.
Bolt’s plan is one of several large-scale projects that have been envisioned in Texas to cater to the AI race. Fermi, a public company co-founded by former Texas governor and US Energy Secretary Rick Perry, has plans for an 11-gigawatt campus in Amarillo.
In December, Business Insider revealed that Amazon had pulled back a $150 million cash advance it had pledged as part of a preliminary deal to anchor the project. Fermi’s disclosure of the reimbursement of that advance caused its stock to fall by 50%. Fermi’s CEO, Toby Neugebauer, told Business Insider that although Amazon had reclaimed its advance, the negotiations for it to take space with Fermi were still ongoing.
Major bank lenders who extended $38 billion to finance the construction of data center campuses in Shackleford County, Texas, and Port Washington, Wisconsin, for Oracle and OpenAI, meanwhile, have had difficulty selling off pieces of the huge loan to other banks and investors. Those troubles stem, in part, from worries about whether Oracle’s credit will be strained by its massive AI spending.
To help allay concerns, Oracle announced it would raise as much as $50 billion in debt and equity in 2026 to continue to pursue its AI buildout while also maintaining “a solid investment-grade balance sheet.”
Last week, Alphabet, Google’s parent company, revealed in its fourth-quarter earnings report that it plans to spend between $175 billion and $185 billion on capital expenditures in 2026, roughly double its outlay in 2025. The spending will largely go toward AI equipment and infrastructure.
A record wave of spending has been announced by big technology companies on AI this year, including Amazon’s disclosure during its earnings last week that it would spend $200 billion alone this year.
Advertising could become a $25 billion business for OpenAI — and pose a threat to Google, according to new estimates on Monday from a top tech analyst.
Evercore ISI’s Mark Mahaney sees the startup generating that level of annual ad revenue by 2030 if it executes well on rolling out this new business.
OpenAI said on Friday that free and Go users of ChatGPT would start seeing ads “in the coming weeks.” OpenAI also laid out its advertising principles, such as clearly labeling ads and not sharing user conversations with advertisers.
“A path to generating several billion dollars in ad revenue in 2026, going to $25B+ by 2030, seems reasonable,” Mahaney wrote in a note to investors.
That’s based on the likely scale of ChatGPT by that time, the proven monetization of high-intent performance marketing platforms, and the current size of this market, the analyst added.
OpenAI’s revenue is growing fast already. CFO Sarah Friar said in a recent blog post that the startup’s annualized revenue topped $20 billion in 2025, up from $2 billion in 2023. However, there are big question marks over OpenAI’s losses and whether it can become profitable in the future.
Advertising could be one way for OpenAI to boost its top and bottom lines.
Mahaney noted that Google’s Search and YouTube businesses likely generated close to $300 billion in ad revenue in 2025, with Meta generating an additional $180 billion. These are highly profitable operations, with operating profit margins of 40%, according to the analyst.
ChatGPT has almost 1 billion weekly active users, many of whom share valuable details with the chatbot, such as what they want and need. Advertisers are willing to pay up for access to this treasure trove. This is the type of intent-based information that forms the backbone of the massive digital ad businesses run by Google and Meta.
OpenAI has said that initial test ads will appear at the bottom of ChatGPT answers and be relevant to the user’s conversation with the chatbot. That approach might not be too intrusive for users, while still being attractive to advertisers, Mahaney said.
“OpenAI’s move directly challenges this core revenue stream by offering an alternative, highly engaging platform for users to discover products and services,” Mahaney wrote. “If ChatGPT can successfully integrate ads that are helpful rather than intrusive, it could siphon off valuable commercial queries that traditionally go to Google.”
The analyst also warned that if OpenAI can develop a “conversational” ad format, where users research and discuss potential purchases within ChatGPT, that could prompt advertisers to shift some of their marketing budgets because this is “high-intent engagement.”
Even if ChatGPT goes all-in on ads, though, don’t expect the chatbot to take Google’s share of the market overnight, Mahaney added.
OpenAI will still have to compete with the tech ecosystem that Google has spent years creating, such as its Chrome web browser, as well as web users’ habit of Googling stuff when they need an answer, Mahaney wrote.
Marketers champing at the bit for AI chatbots to become the next major ad surface may have to suppress their appetites a little longer.
With Google’s Gemini surging in popularity, speculation has been bubbling in the ad industry that the app might be on the cusp of introducing ads to capitalize on the moment — and help offset the hefty AI infrastructure costs.
Not so, according to Google’s VP of global ads, Dan Taylor. In an interview with Business Insider this week, Taylor reaffirmed there are “no plans for ads in the Gemini app.”
Instead, the ads team is prioritizing ad placements within AI search. Google began introducing ads to AI Overviews — the natural language summaries of search results that appear at the top of its search engine results page — in 2024. Last year, it brought ads to AI Mode, its AI chatbot that appears on search pages, which enables users to conduct more in-depth research and ask follow-up questions.
“Search and Gemini are complementary tools with different roles,” Taylor said.
“While they both use AI, search is where you go for information on the web, and Gemini is your AI assistant,” he said. “Search is helping you discover new information, which can include commercial interests like new products or services. We see Gemini as helping you create, analyze, and complete that.”
From an advertising perspective, Google has over 25 years of experience with search ads. Monetizing AI assistants is a relatively new, uncharted territory with numerous questions to consider.
Here are a few:
Where and when should an ad show up?
What would these ads look like, and how should companies think about charging for them?
How can an AI chatbot balance commercial interests while also ensuring users feel they are getting accurate and objective answers?
Could the introduction of ads alienate users in a competitive landscape where apps like Gemini are fighting for supremacy against the likes of OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude?
A first-mover disadvantage?
Ads might feel inevitable as tech giants invest billions of dollars into their AI infrastructures. However, AI companies are aware that making the first move could be perceived as a degradation of their products and cause users to jump ship.
Google’s success in leveraging AI to create financial gains from its existing search product and advertising platform is one advantage it has over arch-rival OpenAI, which is under pressure to demonstrate a path to profitability. It potentially gives Google more leeway to wait before introducing an ad model to Gemini.
Stratechery tech analyst Ben Thompson said in a recent interview on the tech news show TBPN that OpenAI delaying ads in ChatGPT “risks the entire company.”
“They could have launched the world’s crappiest ads in 2023. By today, in 2026, they would be good,” Thompson said. “Now, they’re going to have to launch ads, they’re going to suck, and people are going to be like, ‘This sucks, I’ll just go to Gemini.'”
The rivalry between Google and OpenAI intensified late last year when Google released its Gemini 3 AI model, which received rave reviews. OpenAI CEO Sam Altman responded by issuing a “code red,” telling teams to redirect resources from newer projects, including a yet-to-be-released advertising program, to prioritize improving ChatGPT’s performance.
Alphabet, Google’s parent company, said in its latest quarterly earnings report in October that Gemini had 650 million monthly active users. OpenAI said the same month that ChatGPT had 800 million weekly users.
What Google has learned from ads in AI search so far
Taylor said that more than 80% of Google’s advertisers are currently using some form of AI-powered search functionality. That’s largely through the adoption of tools like AI Max for Search and Performance Max, where Google’s AI algorithm automatically chooses which ad creatives a campaign should run and where to place those ads.
Advertisers can’t yet specifically choose to run ads within AI Mode or AI Overviews. Instead, the algorithm makes the decision to place them there based on targeting variables like location, demographics, keywords, and topics.
“We don’t have any plans to enable buying separately at this phase,” Taylor said.
Taylor said AI Overviews have notched up more than 2 billion monthly active users, and that people are clicking and engaging with AI Overview ads “at about the same rate” as traditional search ads.
Google’s testing of ads in AI Mode isn’t as far along, and it presents more challenges in adapting the traditional search-ads playbook to the AI era. Users have longer back-and-forth conversations in AI Mode, and ads shown too early can feel “intrusive” and create “a trust problem,” Taylor said. A newbie runner seeking helpful information about how to prepare for a marathon later in the year might not be ready straight away for ads featuring performance running shoes, for example, he added.
This month, Google said it had begun testing a new ad format called Direct Offers, which will let advertisers present personalized discounts to shoppers who are about to make a purchase within AI Mode. Taylor said Google is only working with a specific set of advertisers on the Direct Offers pilot and didn’t have more information about when it might become broadly available.
Direct Offers was one of several announcements Google made regarding new AI-shopping experiences. New products included a forthcoming checkout function that will let shoppers complete their purchases inside AI Mode and the Gemini app.
It looks like you may soon be able to change that old email address you created in high school.
Google account users have long been unable to change their email addresses without creating a whole new account, but Google seems to be quietly rolling out an option to update them. That’s according to a support page published by the company, which outlines a new process to change the email or username used to identify your account.
The update on Google’s account help page says certain account holders can now change their @gmail.com address without losing access to their data or services. The feature was first reported in the Google Pixel Hub Telegram group in a message that said the update is being gradually rolled out to users. As of Friday morning, the modified instructions were available on the Hindi version of Google’s support page.
The support page suggests this option is currently only available in some regions, including Hindi-speaking areas.
According to a translated version of the Hindi support page, the new email must end in @gmail.com, and the address can only be changed up to three times. Once a change has been made, it can’t be undone.
To make the change, you would visit your Google Account page, click “Personal Info,” and go to the “Email” section, according to the Telegram message.
It’s unclear when it will roll out more widely, and Google didn’t immediately respond to a request for comment from Business Insider. As of Friday morning, the English support page said usernames ending in @gmail.com usually can’t be changed.
Once the change is made, the Hindi page said, your old Gmail address will be used as an alias to receive emails. You can reuse your old Google account email address at any time, but you can’t create a new Gmail address for the next 12 months.
You can sign in to Google services like Gmail, YouTube, Google Play, or Drive with your old or new email address.