Thibault

A top researcher says a new divide is emerging in AI use — and most people are on the losing side

Are you using AI to think — or letting it think for you?

Vivienne Ming, chief scientist at the Possibility Institute, a metascience research group, and founder of Socos Labs, an AI and education firm, says the tech is splitting people into two groups: a small minority who use it to think better, and a much larger majority who use it to think less.

“The overwhelming trend is substitution,” Ming said in a recent interview with Business Insider in London. Instead of using AI to deepen their reasoning, most people are outsourcing it, she said.

That distinction is what Ming describes as a growing cognitive divide between people who use AI to enhance their thinking and those who rely on it to think for them.

As AI tools become embedded across workplaces, from coding to writing and analysis, a growing number of AI researchers have warned that overreliance on the technology could dull cognitive and independent thinking skills.

The risks are already emerging: when Anthropic’s Claude went down earlier this month, some developers said they struggled to keep working, as tasks that had become routine suddenly felt harder without AI.

‘Productive friction’

To test AI’s impact on cognitive skills, Ming said she ran an experiment from late summer through fall of 2025. She created teams of three, including 39 students from UC Berkeley and 33 others from the San Francisco Bay Area, to use Polymarket data to predict real-world events, either working alone or with AI systems.

The results, she said, showed roughly 90% to 95% of participants fell into two groups: those who relied on AI to generate answers for them, and those who used it to validate their own assumptions.

The remaining minority — around 5% to 10% — took a different approach, which Ming calls the “cyborgs.”

Rather than relying on AI for answers, they used it as a collaborator, exploring ideas, challenging assumptions, and pushing the problem forward, while the AI brought in data and counterarguments.

The process created what Ming described as “productive friction.”

“They would challenge the AI,” she said, and ask, “Don’t tell me why I’m right — tell me why I’m wrong.”

‘Hybrid intelligence’

This dynamic is what Ming calls “hybrid intelligence” — not simply humans plus machines, but a distinct form of intelligence that emerges from how the two interact.

In her research, she found that the best human-AI collaboration wasn’t driven by more advanced large language models but by human traits such as curiosity, intellectual humility, perspective-taking, and the ability to reason under uncertainty.

Her concern is that most current uses of AI push people in the opposite direction.

Ming compares it to GPS: a tool that makes your life easier in the short term but can degrade cognitive abilities over time if overused.

“If you’re using it to think for you,” Ming said of AI models, “this is your long-term cognitive health. So yes, 100% skill erasure.”

The implications extend beyond individuals. Workplaces increasingly reward speed and efficiency — conditions that encourage employees to accept AI-generated outputs rather than interrogate them.

That, Ming warned, could lead to a world of competent but indistinguishable work, or what she called “AI slop.”

“The answer you’re getting out of your phone is the exact same answer everyone else is getting,” she said. “Even if it’s right, it brings you no value.”




Source link

Image of Lakshmi Varanasi

An OpenAI researcher turned venture capitalist says investors are 3 to 5 years behind the latest AI studies

There is a yearslong lag in the AI hype cycle, according to one former AI researcher turned venture capitalist.

Jenny Xiao, who cofounded Leonis Capital in 2021 after a stint at OpenAI, said the current investment excitement around AI is far behind the actual research.

“There is a massive disconnect between what researchers are seeing and what investors are seeing,” Xiao said on the Fortune Magazine podcast this week.

What’s being discussed at the biggest AI conferences is as much as 3 to 5 years behind what researchers are thinking about, Xiao said.

“We are so behind the technical frontier, and that’s the gap I really want to bridge,” she added.

Xiao, who dropped out of a Ph.D. program in economics and AI to take a researcher role at OpenAI, founded Leonis Capital to bridge the worlds of venture capital and deep academic AI research.

“With AI, there needs to be a new generation of founders. There needs to be a new generation of VCs,” she said.

It’s also the first time investors need to be able to provide financial support to both the market and the technology, she added. Unlike SaaS companies, which were built on a “stable tech stack,” AI is moving fast. To keep up, Xiao said investors are going to need to be as technical as the founders.

If she has one piece of advice for investors who haven’t gone deep into the technical side, it’s that they should know “AI progress isn’t linear,” she said.

They should know AI progress happens in “lumps,” she said. So, questions about why AI progress is slowing down or speeding up aren’t the best way to characterize the rate of development.

“It’s neither of those two extremes,” she said. “It’s somewhere in between.”

Leonis Capital did not immediately respond to a request for comment from Business Insider.




Source link

AI-is-creating-a-security-problem-most-companies-arent-staffed.jpeg

AI is creating a security problem most companies aren’t staffed to handle, says an AI researcher

Companies may have cybersecurity teams in place, but many still aren’t prepared for how AI systems actually fail, says an AI security researcher.

Sander Schulhoff, who wrote one of the earliest prompt engineering guides and focuses on AI system vulnerabilities, said on an episode of “Lenny’s Podcast” published Sunday that many organizations lack the talent needed to understand and fix AI security risks.

Traditional cybersecurity teams are trained to patch bugs and address known vulnerabilities, but AI doesn’t behave that way.

“You can patch a bug, but you can’t patch a brain,” Schulhoff said, describing what he sees as a mismatch between how security teams think and how large language models fail.

“There’s this disconnect about how AI works compared to classical cybersecurity,” he added.

That gap shows up in real-world deployments. Cybersecurity professionals may review an AI system for technical flaws without asking: “What if someone tricks the AI into doing something it shouldn’t?” said Schulhoff, who runs a prompt engineering platform and an AI red-teaming hackathon.

Unlike traditional software, AI systems can be manipulated through language and indirect instructions, he added.

Schulhoff said people with experience in both AI security and cybersecurity would know what to do if an AI model is tricked into generating malicious code. For example, they would run the code in a container and ensure the AI’s output doesn’t affect the rest of the system.

The intersection of AI security and traditional cybersecurity is where “the security jobs of the future are,” he added.

The rise of AI security startups

Schulhoff also said that many AI security startups are pitching guardrails that don’t offer real protection. Because AI systems can be manipulated in countless ways, claims that these tools can “catch everything” are misleading.

“That’s a complete lie,” he said, adding that there would be a market correction in which “the revenue just completely dries up for these guardrails and automated red-teaming companies.”

AI security startups have been riding the wave of investor interest. Big Tech and venture capital firms have poured money into the space as companies rush to secure AI systems.

In March, Google bought cybersecurity startup Wiz for $32 billion, a deal aimed at strengthening its cloud security business.

Google CEO Sundar Pichai said AI was introducing “new risks” at a time when multi-cloud and hybrid setups are becoming more common.

“Against this backdrop, organizations are looking for cybersecurity solutions that improve cloud security and span multiple clouds,” he added.

Business Insider reported last year that growing security concerns around AI models have helped fuel a wave of startups pitching tools to monitor, test, and secure AI systems.




Source link

OpenAIs-chief-researcher-says-Mark-Zuckerberg-hand-delivered-soup-to-an.jpeg

OpenAI’s chief researcher says Mark Zuckerberg ‘hand-delivered soup’ to an employee in a recruiting effort

It’s been said that the way to one’s heart is through their stomach. It sounds like Meta CEO Mark Zuckerberg wanted to see if the AI talent war, or at least one skirmish, could be won the same way.

Mark Chen, chief research officer at OpenAI, recently said that Zuckerberg personally delivered homemade soup to an OpenAI employee as part of a campaign to recruit the unnamed worker to Meta.

“It’s been kind of interesting and fun to see it escalate over time. You know, some interesting stories here are Zuck actually went and hand-delivered soup to people that he was trying to recruit from us,” Chen told Ashlee Vance on the author’s “Core Memory” podcast.

Chen said Zuckerberg’s move was “shocking to me at the time” but since then, he said he’s returned the favor.

“I’ve also delivered soup to people we’ve been recruiting from Meta,” Chen said, laughing.

The poaching efforts focused on OpenAI’s researchers and engineers underscores the company’s position in the AI race, Chen said.

“We’re always under attack,” Chen told Vance. “This is how I know we’re in the lead, right? Any company starts, where do they try to recruit from? It’s OpenAI. They want the expertise, they want our vision, our philosophy of the world. And we’ve made so many star researchers, right? I think OpenAI, more than anywhere else, has been a place that makes names in AI today.”

Arguably, no other rival tech company has been as aggressive in the so-called AI talent wars against OpenAI as Zuckerberg’s Meta.

In June, OpenAI CEO Sam Altman said that Meta tried to lure some of his engineers with $100 million signing bonuses. The CEO said at the time that none of his top talent was poached, but ChatGPT co-creator Shengjia Zhao later joined Meta’s Superintelligence Lab.

Chen said that Meta tried to recruit “half” of Chen’s direct reports unsuccessfully, but that OpenAI has been “fairly good” at retaining top talent. A Meta spokesperson declined to comment.

Top AI researchers have become a hot commodity in the AI race, as it’s generally believed that there is a relatively small number of researchers and engineers capable of achieving breakthroughs or building new LLMs from the ground up.

“It’s like looking for LeBron James,” Databricks’ vice president of AI, Naveen Rao, told The Verge’s Command Line newsletter last year. “There are just not very many humans who are capable of that.”




Source link