
OpenAI just hired another employee from Mira Murati’s Thinking Machines Lab

Another employee at Thinking Machines Lab is leaving to rejoin OpenAI.

It’s the latest in a string of departures from the $12 billion AI startup, which is led by former OpenAI CTO Mira Murati and lately has been the subject of high-profile poaching campaigns by bigger tech companies.

The latest employee to go back to OpenAI is Jolene Parish, who joined Thinking Machines Lab in April last year, according to her LinkedIn profile. She had worked at OpenAI for three years prior. Before that, she worked for 10 years on security at Apple, her profile says.

Other employees rejoined OpenAI last month: cofounders Barret Zoph, the startup’s former CTO, and Luke Metz left, along with researcher Sam Schoenholz.

Lia Guy, another researcher, also rejoined OpenAI, The Information reported. Another cofounder, Andrew Tulloch, left for Meta late last year, The Wall Street Journal reported.

OpenAI and Thinking Machines Lab declined to comment.

Thinking Machines Lab raised a monster $2 billion funding round last year, valuing the company at $12 billion, spokespeople said at the time. The startup launched its first product, Tinker, last October.

The San Francisco-based company has become known for attracting star-studded talent. It quietly hired Neal Wu, a legendary coder who won three gold medals in an Olympiad for programming, and Soumith Chintala, the creator of the open-source AI project PyTorch at Meta, who is now Thinking Machines Lab’s CTO, Business Insider previously reported.

Have a tip? Contact this reporter via email at crollet@businessinsider.com or on Signal and WhatsApp at 628-282-2811. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.






DeepMind’s CEO says using AI can make you a genius — or hurt your critical thinking skills

It’s up to you whether AI makes you sharper or slowly dulls your brain, says Demis Hassabis.

In a Thursday interview with entrepreneur Varun Mayya on the sidelines of the India AI Impact Summit, the Google DeepMind CEO said that AI is just like the internet. People can use it to learn all kinds of topics, or use it in ways that “degrade” their thinking.

“With AI, if you use it in a lazy way, it will make you worse at critical thinking and so on,” he said. “But that’s down to you as the individual. No one can help you do that.”

He added that people need to be smart and use these technologies in ways that enhance their thinking rather than dull it.

Hassabis cofounded DeepMind in 2010; Google acquired it in 2014 and merged it with Google Brain in 2023 to form Google DeepMind, the lab behind tools such as Gemini and Nano Banana. Hassabis and his DeepMind colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for their work on protein structure prediction.

As AI gets incorporated into daily life, debates about its risks and rewards have intensified, with several tech leaders warning about the dangers of an overreliance on AI tools.

Earlier this week, tech billionaire Mark Cuban said that there are two types of people who use AI.

“There are generally 2 types of LLM users, those that use it to learn everything, and those that use it so they don’t have to learn anything,” Cuban said of large language models in an X post on Tuesday.

Cuban has previously said that AI models can’t provide all the answers and are “stupid” but like “a savant that remembers everything.”

At a June conference, the CEO of French AI lab Mistral said that a risk of using AI for everything is that humans will stop trying.

“The biggest risk with AI is not that it will outsmart us or become uncontrollable, but that it will make us too comfortable, too dependent, and ultimately too lazy to think or act for ourselves,” Arthur Mensch said.






A Nobel Prize-winning physicist explains how to use AI without letting it replace your thinking

Think AI makes you smarter?

Probably not, according to Saul Perlmutter, a Nobel Prize-winning physicist credited with discovering that the universe’s expansion is accelerating.

He said AI’s biggest danger is psychological: it can give people the illusion they understand something when they don’t, weakening judgment just as the technology becomes more embedded in our daily work and learning.

“The tricky thing about AI is that it can give the impression that you’ve actually learned the basics before you really have,” Perlmutter said on a Wednesday podcast episode with Nicolai Tangen, CEO of Norges Bank Investment Management.

“There’s a little danger that students may find themselves just relying on it a little bit too soon before they know how to do the intellectual work themselves,” he added.

Rather than rejecting AI outright, Perlmutter said the answer is to treat it as a tool — one that supports thinking instead of doing it for you.

Use AI as a tool — not a substitute

Perlmutter said that AI can be powerful — but only if users already know how to think critically.

“The positive is that when you know all these different tools and approaches to how to think about a problem, AI can often help you find the bit of information that you need,” he said.

At UC Berkeley, where Perlmutter teaches, he and his colleagues developed a critical-thinking course centered on scientific reasoning, including probabilistic thinking, error-checking, skepticism, and structured disagreement, taught through games, exercises, and discussion designed to make those habits automatic in everyday decisions.

“I’m asking the students to think very hard about how would you use AI to make it easier to actually operationalize this concept — to really use it in your day-to-day life,” he said.

The confidence problem

One of Perlmutter’s concerns is that AI often projects more certainty than it deserves and can be “overly confident” in what it says.

The challenge, Perlmutter said, is that AI’s confident tone can short-circuit skepticism, making people more likely to accept its answers at face value rather than question whether they’re correct.

That confidence, he said, mirrors one of the most dangerous human cognitive biases: trusting information that appears authoritative or confirms our existing beliefs.

To counter that instinct, Perlmutter said people should evaluate AI outputs the same way they would any human claim — weighing credibility, uncertainty, and the possibility of error rather than accepting answers at face value.

Learning to catch when you’re being fooled

In science, Perlmutter said, researchers assume they are making mistakes and build systems to catch them. For example, scientists hide their results from themselves, he said, until they’ve exhaustively checked for errors, thereby reducing confirmation bias.

The same mindset applies to AI, he added.

“Many of [these concepts] are just tools for thinking about where are we getting fooled,” he said. “We can be fooling ourselves, the AI could be fooling itself, and then could fool us.”

That’s why AI literacy also involves knowing when not to trust the output, he said — and being comfortable with uncertainty rather than treating AI answers as absolute truth.

Still, Perlmutter is clear that this isn’t a problem with a permanent solution.

“AI will be changing,” he said, “and we’ll have to keep asking ourselves: is it helping us, or are we getting fooled more often? Are we letting ourselves get fooled?”



