Judge throws out Sam Altman’s sister’s lawsuit accusing him of sexual abuse — but leaves door open to refile

A federal judge has tossed a lawsuit brought by the sister of OpenAI CEO Sam Altman alleging he sexually abused her as a child.

Annie Altman’s allegations were brought too late for her lawsuit to survive, US District Judge Zachary Bluestone ruled Friday.

Bluestone, however, gave Annie Altman’s lawsuit a lifeline by allowing her to refile it under Missouri’s Childhood Sexual Abuse law, which offers an extended statute of limitations. He gave her until April 3 to file a new lawsuit.

In the same Friday order, the judge greenlit a countersuit from Sam Altman accusing Annie of defamation and of abusing the legal process.

Annie Altman filed her lawsuit in a federal court in Missouri in January 2025. She alleged that Sam Altman, who is nine years older than her, sexually abused her between 1997 and 2006, beginning when she was three years old, when they were growing up in Clayton, Missouri.

Sam Altman has denied the allegations. In a public statement following the filing of the lawsuit, he, his two younger brothers, and their mother said Annie Altman had “mental health issues.”

Sam Altman’s countersuit said that his sister made up the allegations of sexual abuse — posting them on social media as well as including them in her lawsuit — after their family declined her “demands for unrestricted financial support” in light of her mental health issues.

“One of the only points of agreement in this case is that the claims are disturbing — particularly so if they prove true but nonetheless unfortunate if not,” Bluestone wrote in his Friday ruling.

An attorney for Sam Altman declined to comment for this story. Attorneys for Annie Altman didn’t immediately respond to Business Insider’s requests for comment.

5 big takeaways from Sam Altman’s Saturday night AMA on OpenAI’s Pentagon deal

  • Sam Altman went on X on Saturday night and told users to ask him anything about OpenAI’s Pentagon deal.
  • Altman on Friday night announced that OpenAI will work with the Pentagon and let it use its AI models.
  • Here are five big takeaways from Altman’s AMA session.

Sam Altman hopped onto X on Saturday night and told users to ask him anything about OpenAI’s agreement with the Pentagon.

Altman, late on Friday, announced that his company had finalized a deal with the Department of War to use its AI models. OpenAI’s deal came after Anthropic refused an ultimatum regarding the terms of use of its frontier model, Claude, for deployment in mass domestic surveillance and fully autonomous weapons.

Here are five big takeaways from Altman’s AMA.

The OpenAI-Pentagon deal was ‘rushed,’ and Altman knows the ‘optics’ don’t look good

The Pentagon deal was done quickly in “an attempt to de-escalate the situation,” Altman wrote on X.

He added in a separate post that the deal had been “rushed.”

Still, the “optics don’t look good” for OpenAI, he wrote.

“If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” he wrote.

“If not, we will continue to be characterized as rushed and uncareful,” he wrote.

Altman added that he sees “promising signs” for where this will all land for OpenAI.

OpenAI took the Pentagon deal because it ‘got comfortable’ with the ‘contract language’

Altman was asked why the Department of War went with OpenAI over Anthropic. He said he wouldn’t speak for his competitor, but did speculate on why OpenAI got the contract inked first.

“First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one,” Altman wrote. “I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.”

He added that OpenAI and the Department of War “got comfortable with the contractual language” as well.

“I think Anthropic may have wanted more operational control than we did,” he added.

OpenAI has 3 redlines, but it’s open to changing them as tech evolves

Altman said that OpenAI has “three redlines.” But those redlines could change — and more could be added — as the technology evolves and “new risks” come into play.

“But a really important point: we are not elected. We have a democratic process where we do elect our leaders,” Altman wrote. “We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn’t ethical in the most important areas.”

“Seems fine for us to decide how ChatGPT should respond to a controversial question,” he added. “But I really don’t want us to decide what to do if a nuke is coming towards the US.”

Altman says the current path is ‘dangerous’ for Anthropic and the US

Altman said OpenAI had been talking to the Department of War for “many months” about non-classified work, before “things shifted into high gear on the classified side.”

“We found the DoW to be flexible on what we needed, and we want to support them in their very important mission,” he wrote.

“I think the current path things are on is dangerous for Anthropic, healthy competition, and the US,” Altman wrote on X as well. “We negotiated to make sure similar terms would be offered to all other AI labs.”

He also asked for “some empathy” for the Department of War, given its “extremely important mission.”

And, in Altman’s words:

Our industry tells them “The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.”

And then we say

“But we won’t help you, and we think you are kind of evil.”

I don’t think I’d react great in that situation.

I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Altman says AI can help counter big security threats on two fronts

Altman said AI could be useful on two fronts. First, the US’s “ability to defend against major cyber attacks,” particularly an attack that might take down the country’s electrical grid.

Second, biosecurity is an area where AI could help.

“I do not think we are currently set up well enough to detect and respond to a novel pandemic threat,” Altman said.

Sam Altman says concerns of ChatGPT’s energy use are overblown: ‘It also takes a lot of energy to train a human’

Sam Altman is pushing back on the idea that ChatGPT consumes too much energy.

“One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query,” Altman told The Indian Express last week on the sidelines of a major AI summit. “But it also takes a lot of energy to train a human.”

Altman suggested it’s not an apples-to-apples comparison, arguing that it’s unfair to discount the years spent nurturing and educating someone to be capable of making their own inquiries.

“It takes a lot of energy to train a human,” he said, prompting some laughter in the crowd. “It takes, like, 20 years of life, and all of the food you eat during that time before you get smart.”

Altman said the clock really began thousands of years ago.

“It took, like, the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science or whatever,” he said.

Altman also called out what he said were “totally insane” claims on the internet that OpenAI is guzzling down water to power ChatGPT.

“Water is totally fake,” Altman said, when asked about concerns AI companies use too much water. “It used to be true, we used to do evaporative cooling in data centers, but now that we don’t do that, you know, you see these like things on the internet where, ‘Don’t use ChatGPT, it’s 17 gallons of water for each query’ or whatever.”

In June, Altman said that the average ChatGPT query consumes roughly the amount of energy needed to power a lightbulb for a few minutes.

“People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes,” he wrote on X.
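Altman’s equivalences are easy to sanity-check with a line of arithmetic. A minimal sketch, assuming a roughly 1,000-watt oven element and a 10-watt high-efficiency LED bulb (typical wattages for illustration, not figures from the article):

```python
# Sanity check on Altman's figure of 0.34 watt-hours per average ChatGPT query.
# The appliance wattages below are assumed typical values, not from the article.
query_wh = 0.34                       # watt-hours per query (Altman's figure)
query_joules = query_wh * 3600        # 1 Wh = 3600 J -> ~1,224 J per query

oven_watts = 1000                     # assumed: a modest electric oven element
led_bulb_watts = 10                   # assumed: a high-efficiency LED bulb

oven_seconds = query_joules / oven_watts            # seconds of oven use
bulb_minutes = query_joules / led_bulb_watts / 60   # minutes of bulb use

print(f"{oven_seconds:.1f} s of oven use, {bulb_minutes:.1f} min of bulb use")
```

At those assumed wattages, 0.34 Wh works out to about 1.2 seconds of oven use and about two minutes of bulb use, which is consistent with the comparison in the quote.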

Altman said it is fair, on the whole, to raise concerns about the AI industry’s overall energy consumption, given the rapid growth in usage. He said that is why he and other AI CEOs have pushed alternative energy sources like solar, wind, and nuclear.

Unlike other CEOs — namely xAI’s Elon Musk — Altman dismisses the idea that space-based data centers, a concept some companies have floated as a way to reduce energy consumption on Earth, are realistic within the next decade.

Outside of OpenAI, Altman is a major investor in nuclear energy. He previously served as chairman of Oklo, a nuclear energy startup, and has been a major backer of Helion, which plans to build what it calls “the world’s first fusion power plant” in Washington state.

In the US, data center energy consumption is becoming a major topic. Last month, President Donald Trump said he was working with tech companies on “a commitment to the American people” to ensure that citizens don’t pay higher energy bills because of a nearby data center.

Consulting firm McKinsey & Company estimated last year that data centers could account for 14% of total power demand in the US by 2050.

Sam Altman says Elon Musk’s idea of putting data centers in space is ‘ridiculous’

SpaceX CEO Elon Musk and OpenAI CEO Sam Altman famously don’t agree on much.

The latest point of contention: data centers in space. Musk has made it a priority. Altman thinks it’s a fantasy, at least for now.

“I honestly think the idea with the current landscape of putting data centers in space is ridiculous,” Altman said during a live interview with local media in New Delhi on Friday, causing audience members to laugh.

Altman said that orbital data centers could “make sense someday,” but factors like launch costs and the difficulty of repairing a computer chip in space remain overwhelming obstacles.

“We are not there yet,” Altman added. “There will come a time. Space is great for a lot of things. Orbital data centers are not something that’s going to matter at scale this decade.”

Musk would almost certainly disagree.

While many Big Tech and AI companies are spending billions on data center construction on Earth, Musk’s eyes are on the stars, as usual. Orbital data centers are his latest ambition, which he mentioned in an all-hands xAI meeting in December.

In February, SpaceX said its goal is to launch a “constellation of a million satellites that operate as orbital data centers.” The company has already begun hiring engineers to make that happen.

During an all-hands meeting with xAI employees this month, Musk said SpaceX’s acquisition of xAI will allow the companies to deploy orbital data centers faster.

Despite Altman’s skepticism, other tech leaders are also racing to place data centers in space. Google’s Project Suncatcher, unveiled in November 2025, aims to do just that. Google CEO Sundar Pichai told Fox News Sunday the company could start placing data centers — powered by the sun — in space as early as 2027.

Tech and AI companies rely on data centers to power their products, like large language models and chatbots. Those data centers, however, can deplete water resources, strain power grids, increase pollution, and decrease the overall quality of life.

An investigation by Business Insider published last year found that over 1,200 data centers had been approved for construction across the US by the end of 2024, nearly four times the number from 2010.

Now, proposed data center campuses in Texas, Oklahoma, and elsewhere are increasingly facing stiff resistance from local communities.

Jake Paul says Sam Altman taught him the value of a 15-minute meeting

Jake Paul was a firebrand YouTuber. Then he was an NFT merchant and a betting site operator. Now, Paul is a professional boxer — and venture capitalist. And he’s learning from one of the biggest names in tech.

On the “Sourcery” podcast, Paul said he met OpenAI CEO Sam Altman when the two sat next to each other at President Donald Trump’s inauguration.

“Sam likes fast cars, and so do I,” Paul said. “So, we just started talking about cars, and then we got along, and that was really it.”

Paul’s Anti Fund — which is also led by his brother Logan and longtime founder Geoffrey Woo — invested in OpenAI in 2025. The biggest lesson he’s learned from Altman is efficiency, Paul said.

He described the quick-and-tidy meetings that Altman runs. The OpenAI CEO “walks into the room, sits down, let’s get right into the conversation, boom boom boom,” he said.

In 15 minutes alone, Altman was “hella productive,” Paul said. Then, Altman can go on to his next meeting and do it all over again.

“We’ll do hourlong meetings or calls and just waste time,” Paul said. “I think that was inspiring because time is the most valuable thing, and it’s the only reason you can’t accomplish more.”

Indeed, Altman has long opted for the 15-minute meeting. In a 2018 blog post, he wrote that the ideal meeting time is either around 15 to 20 minutes or 2 hours, but “the default of 1 hour is usually wrong.”

Paul has worked closely with OpenAI in the last year, beyond participating in fundraising.

Remember all of those strange Paul memes running around the internet during the Sora 2 launch? They were by design. Paul said he helped consult on the project and was one of the first to sign over his name, image, and likeness.

Woo also appeared on the podcast, and spelled out the thinking behind those far-out memes (such as an AI Paul declaring he was gay). “It was not something that was like, ‘Hey, Jake Paul is now gay.’ Jake was thoughtful in terms of why we were part of that launch.”

Woo also said that he had formed a good friendship with Altman and Mark Chen, OpenAI’s chief research officer.

For the Sora 2 launch, Paul said that he had “regular calls” with OpenAI and offered “super detailed consulting.”

“Me and my brother have however many years combined of social media experience since the beginning,” Paul said. “We were there when the term ‘influencer’ was even made up.”

This background, Paul said, helped him give good advice on what OpenAI’s social media-like interface should look like. He advised on both what creators and audiences wanted, he said.

Anti Fund closed its $30 million fund in September. Other investments include defense tech startup Anduril and prediction market Polymarket.

Woo said their ties to OpenAI remain strong. “We were just at OpenAI for three hours looking for other ways to collaborate,” he said. “Things might be cooking.”

Sam Altman says OpenClaw creator Peter Steinberger is joining OpenAI to build next-gen personal agents

  • Sam Altman says OpenClaw creator Peter Steinberger is joining OpenAI.
  • OpenClaw is a viral AI agent launched last month.
  • Altman said Steinberger will build “next generation” AI agents at OpenAI.

OpenAI just scored a win in the AI talent wars.

Sam Altman said Sunday on X that Peter Steinberger, the creator of OpenClaw, the viral AI agent powering the agent-only social network Moltbook, is joining OpenAI.

Altman said Steinberger would build the “next generation” of personal AI agents at the company.

“He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people,” Altman said about Steinberger. “We expect this will quickly become core to our product offerings.”

Altman added that OpenClaw, which was briefly known as Moltbot and then Clawdbot before Anthropic took notice, will live on as an open-source project supported by OpenAI.

“The future is going to be extremely multi-agent and it’s important to us to support open source as part of that,” he wrote.

Steinberger, previously best known for founding the PDF processing company PSPDFKit, came out of retirement to launch OpenClaw in late 2025.

He is likely to bring a new perspective to OpenAI’s race to develop artificial general intelligence. Steinberger said he believes AGI is best as a specialized form of intelligence rather than a generalized one.

“What can one human being actually achieve? Do you think one human being could make an iPhone or one human being could go to space?” Steinberger said on a Y Combinator podcast in February. “As a group we specialize, as a larger society we specialize even more.”

Sam Altman says he can’t wait to get Elon Musk under oath

  • Sam Altman said he’s “really excited” to get Elon Musk under oath.
  • Their case will go to trial in April, a California judge said in January.
  • Musk has accused OpenAI and Altman of misleading him into thinking it would remain a nonprofit.

Sam Altman is pumped to take on Elon Musk in court.

“Really excited to get Elon under oath in a few months, Christmas in April!” the OpenAI CEO said in a Tuesday evening X post.

He also reposted his chief security officer Jason Kwon’s X post, with the caption “concerning.”

The post contained screenshots of a court filing from OpenAI’s attorneys, which said that Musk preferred using messaging apps like Signal or XChat with message retention settings of a week or less.

Altman and Musk took their yearslong public feud to the next level in 2024. Musk, who is Tesla and SpaceX’s CEO, filed a lawsuit against OpenAI and Altman in February 2024, accusing Altman of jeopardizing OpenAI’s nonprofit mission.

Musk said that he contributed $38 million to OpenAI, thinking it would remain a nonprofit. He was one of the company’s founders, along with Altman, PayPal cofounder Peter Thiel, and others.

Despite OpenAI’s attorneys’ attempts to have the case thrown out, a California judge said in a January hearing that there was enough evidence to go to trial, which is set for April.

The billionaire duo have been trading barbs on social media. Musk attacked OpenAI’s ChatGPT on January 20, writing “Don’t let your loved ones use ChatGPT.” He was responding to an X post alleging that the chatbot has been linked to multiple deaths since 2022.

Altman responded to Musk’s post, slamming Tesla’s Autopilot system as unsafe, and questioning xAI’s Grok chatbot. Grok has faced criticism from governments in several countries after reports of Grok users uploading pictures of women and minors and asking the chatbot to undress them.

Representatives for Musk and Altman did not respond to requests for comment from Business Insider.
Chong Ming Lee, Junior News Reporter at Business Insider's Singapore bureau.

Jensen Huang says Nvidia would love to back an OpenAI IPO, and there’s ‘no drama’ with Sam Altman

Jensen Huang says Nvidia would love to invest in a future OpenAI IPO.

Huang said in an interview on CNBC’s “Mad Money” on Tuesday that there was “no drama” between Nvidia and OpenAI CEO Sam Altman, pushing back against recent chatter of tension in the relationship between the two companies.

“The first deal is on,” the Nvidia CEO said, referring to Nvidia’s September deal with OpenAI, under which Nvidia said it planned to invest up to $100 billion in the AI startup.

“And then there’s, of course, an IPO in the future,” he added. “We love to be participating in that as well.”

Huang also described OpenAI as a “once in a generation company” and said Nvidia is “delighted to invest in it.”

His comments come amid reports suggesting internal unease around the deal.

The Wall Street Journal reported on Saturday that the investment had sparked internal concerns at Nvidia, with some executives questioning the deal, according to people familiar with the matter.

Separately, Reuters reported on Tuesday that OpenAI had been unhappy with certain newer Nvidia chips and had looked at alternatives since last year, citing people familiar with the matter.

Huang told reporters in Taipei on Saturday that speculation of any dissatisfaction with OpenAI was “nonsense.”

“We will invest a great deal of money, probably the largest investment we’ve ever made,” he added.

Altman has also pushed back on rumors of tension.

“We love working with NVIDIA and they make the best AI chips in the world,” wrote Altman in a post on X on Tuesday.

“We hope to be a gigantic customer for a very long time. I don’t get where all this insanity is coming from,” he added.

OpenAI is one of the world’s most valuable private AI companies and a major customer for Nvidia’s chips, which power the training and deployment of large language models.

The startup has not announced plans for an IPO, but its fundraising and computing needs have fueled speculation about how it will finance future growth.

“Big Short” investor Michael Burry said in a Substack exchange in January that he was surprised that ChatGPT “kicked off a multi-trillion-dollar infrastructure race.”

“It’s like someone built a prototype robot and every business in the world started investing for a robot future,” he wrote.

Read Sam Altman’s internal Slack message to employees saying ICE ‘is going too far’

Being patriotic means you also need to call out “overreach” when you see it, Sam Altman privately told OpenAI employees in a message that said Immigration and Customs Enforcement had gone “too far.”

“I love the US and its values of democracy and freedom and will be supportive of the country however I can; OpenAI will too,” the OpenAI CEO wrote in an internal Slack message. “But part of loving the country is the American duty to push back against overreach. What’s happening with ICE is going too far.”

OpenAI employees responded positively to Altman’s message on Slack, including with heart and thank-you emojis.

Altman’s message, which was first reported by The New York Times’ DealBook newsletter, comes as CEOs and tech leaders face internal and external pressure in the wake of the deadly Border Patrol shooting of Alex Pretti on Saturday. Pretti is the second person to be fatally shot by federal law enforcement amid a surge in immigration enforcement in and around Minneapolis.

Altman also praised Trump’s leadership in his message and expressed hope that the president could cool tensions — the latest example of a CEO attempting to balance being critical of actions tied to the Trump administration’s policies while also staying on the president’s good side.

“President Trump is a very strong leader, and I hope he will rise to this moment and unite the country,” Altman wrote. “I am encouraged by the last few hours of response and hope to see trust rebuilt with transparent investigations.”

As a general principle, Altman wrote that OpenAI tries to “stick to our convictions and not get blown around by changing fashions too much.”

On Monday, the White House appeared to be recalibrating its response in the wake of significant criticism, including from some congressional Republicans.

White House press secretary Karoline Leavitt declined to associate Trump with Homeland Security Secretary Kristi Noem and White House advisor Stephen Miller’s initial statements that Pretti was trying to commit domestic terrorism.

Read Sam Altman’s message to employees

I love the US and its values of democracy and freedom and will be supportive of the country however I can; OpenAI will too. But part of loving the country is the American duty to push back against overreach. What’s happening with ICE is going too far. There is a big difference between deporting violent criminals and what’s happening now, and we need to get the distinction right.
President Trump is a very strong leader, and I hope he will rise to this moment and unite the country. I am encouraged by the last few hours of response and hope to see trust rebuilt with transparent investigations.
As a company, we aim to stick to our convictions and not get blown around by changing fashions too much. We didn’t become super woke when that was popular, we didn’t start talking about masculine corporate energy when that was popular, and we are not going to make a lot of performative statements now about safety or politics or anything else. But we are going to continue to try to figure out how to actually do the right thing as best as we can, engage with leaders and push for our values, and speak up clearly about it as needed.

Correction: January 27, 2026 — Alex Pretti was fatally shot by Border Patrol, not ICE.

Do you work at OpenAI? Contact the reporter from a non-work email and device at bgriffiths@businessinsider.com