A top researcher says a new divide is emerging in AI use — and most people are on the losing side

Are you using AI to think — or letting it think for you?

Vivienne Ming, chief scientist at the Possibility Institute, a metascience research group, and founder of Socos Labs, an AI and education firm, says the tech is splitting people into two groups: a small minority who use it to think better, and a much larger majority who use it to think less.

“The overwhelming trend is substitution,” Ming said in a recent interview with Business Insider in London. Instead of using AI to deepen their reasoning, most people are outsourcing it, she said.

That distinction is what Ming describes as a growing cognitive divide between people who use AI to enhance their thinking and those who rely on it to think for them.

As AI tools become embedded across workplaces, from coding to writing and analysis, a growing number of AI researchers have warned that overreliance on the technology could dull cognitive and independent thinking skills.

The risks are already emerging: when Anthropic’s Claude went down earlier this month, some developers said they struggled to keep working, as tasks that had become routine suddenly felt harder without AI.

‘Productive friction’

To test AI’s impact on cognitive skills, Ming said she ran an experiment from late summer through fall of 2025. She recruited 39 students from UC Berkeley and 33 other participants from the San Francisco Bay Area, organizing them into teams of three that used Polymarket data to predict real-world events, working either on their own or with AI systems.

The results, she said, showed roughly 90% to 95% of participants fell into two groups: those who relied on AI to generate answers for them, and those who used it to validate their own assumptions.

The remaining 5% to 10% took a different approach; Ming calls them the “cyborgs.”

Rather than relying on AI for answers, they used it as a collaborator, exploring ideas, challenging assumptions, and pushing the problem forward, while the AI brought in data and counterarguments.

The process created what Ming described as “productive friction.”

“They would challenge the AI,” she said, and ask, “Don’t tell me why I’m right — tell me why I’m wrong.”

‘Hybrid intelligence’

This dynamic is what Ming calls “hybrid intelligence” — not simply humans plus machines, but a distinct form of intelligence that emerges from how the two interact.

In her research, she found that the best human-AI collaboration wasn’t driven by more advanced large language models but by human traits such as curiosity, intellectual humility, perspective-taking, and the ability to reason under uncertainty.

Her concern is that most current uses of AI push people in the opposite direction.

Ming compares it to GPS: a tool that makes your life easier in the short term but can degrade cognitive abilities over time if overused.

“If you’re using it to think for you,” Ming said of AI models, “this is your long-term cognitive health. So yes, 100% skill erasure.”

The implications extend beyond individuals. Workplaces increasingly reward speed and efficiency — conditions that encourage employees to accept AI-generated outputs rather than interrogate them.

That, Ming warned, could lead to a world of competent but indistinguishable work, or what she called “AI slop.”

“The answer you’re getting out of your phone is the exact same answer everyone else is getting,” she said. “Even if it’s right, it brings you no value.”


The creator of Anthropic’s Claude Code likes to hire engineers who do ‘side quests’ like making kombucha

Want a job at Anthropic? It might help to get a hobby.

The AI boom is changing the job requirements for engineers. Not only do they need coding skills, but they must also know how to use vibe-coding tools and stay up to date with new AI models.

Anthropic leader Boris Cherny looks for something else: “Side quests.”

“When I hire engineers, this is definitely something I look for,” he said on “The Peterman Pod.”

Cherny’s definition of side quests includes “cool weekend projects,” like someone who’s “really into making kombucha.” It’s a sign that the engineer is curious and interested in other things, he said.

Much of Cherny’s own growth came from his side projects. Cherny is now a key figure at Anthropic. He created Claude Code, a tool that is now popular with engineers across the country.

“These are well-rounded people,” he said. “These are the kind of people I enjoy working with.”

Cherny also said he prefers that his new hires be “generalists.”

He gave the example of an engineer who can code, but is also able to work on product and design. That all-star engineer also seeks out user feedback.

“This is how we recruit for all functions, now,” he said. “Our project managers code, our data scientists code, our user researcher codes a little bit.”

Cherny isn’t alone in pushing for jobs to become more generalist. Figma CEO Dylan Field said in October that AI was causing job titles to merge, resulting in everyone being a “product builder.”

What else is Anthropic looking for? For a time, it restricted whether candidates could use AI in their applications.

In May, Business Insider reported that Anthropic asked candidates for certain jobs not to use AI in their written responses so the company could test their “non-AI-assisted communication skills.”

Anthropic changed its policy in July, allowing candidates to seek out assistance from Claude.

For younger engineers, a job at Anthropic may be hard to come by. In May, CPO Mike Krieger said on “Hard Fork” that he was focused on hiring experienced engineers — and had “some hesitancy” about entry-level workers.

On the podcast, Cherny said that his love of generalists came from his career trajectory. Working at startups since he was 18, Cherny had to do everything, he said.

“At big companies, you get forced into this particular swim lane,” he said. “It’s just so artificial.”
