Natalie Musumeci

Lawsuit against Elon Musk’s xAI alleges Grok created sexualized deepfakes of 3 minors

A new lawsuit against Elon Musk’s xAI alleges that its flagship chatbot, Grok, was used to create sexualized deepfake images of three minors — content the complaint says amounts to child sexual abuse material.

The proposed class action, filed Monday in a California federal court, accuses the AI startup of profiting from the “sexual predation of real people, including children.”

“Nearly all the companies creating, marketing, and selling AI recognized the dangers of such a tool and chose to enact industry-standard guardrails that would prevent the use of their products by one extremely dangerous group: child sex predators. XAI did not,” the lawsuit says.

Representatives for Musk and xAI did not immediately respond to a request for comment from Business Insider.

Musk previously said in a January post on X that he was “not aware of any naked underage images generated by Grok. Literally zero.”

Grok generates images based on user prompts, he wrote.

“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” Musk said. “There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

The lawsuit against xAI says that the Tennessee plaintiffs are “three of the minor victims of xAI’s knowing production, possession, and distribution of AI-generated child sexual abuse material” depicting them.

The plaintiffs, identified in the court papers only as Jane Doe 1, Jane Doe 2, and Jane Doe 3, allege that xAI’s AI tools were used to make nude images and videos of them. Jane Doe 1, who is now an adult, was a minor at the time of the alleged incidents, while the other two plaintiffs are still minors, according to the lawsuit.

In December 2025, Jane Doe 1 received a message from an anonymous Instagram account warning her that “pics” of her had been generated by someone she knew and spread across the group-chat platform Discord.

“Through a series of messages, the anonymous user went on to explain that the perpetrator had uploaded a folder of image and video files depicting her and other minor females to Discord,” the lawsuit says.

The anonymous user eventually sent Jane Doe 1 several sexualized AI-generated images and videos of her and other minor girls, according to the lawsuit.

“At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses,” the lawsuit says. “The images showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude.”

Jane Doe 1 alerted the other minors in the images and their families, and a criminal investigation was opened in Tennessee, according to the lawsuit. It added that local police arrested a suspect in connection with the case in December 2025.

Last month, Jane Doe 2 learned through the investigation that at least two of her images had also been used to produce sexually explicit AI-generated content using xAI’s tools, the lawsuit says.

Local law enforcement told the girl and her mother that one image taken of the girl on the beach in a blue bikini had been “morphed to depict her without any clothes,” the lawsuit says.

Authorities also informed Jane Doe 3 that the AI-generated images recovered from the suspect’s phone included one that had been “morphed to depict her fully nude,” according to the lawsuit.

Vanessa Baehr-Jones, an attorney for the plaintiffs, told Business Insider that her clients have endured a “nightmare.”

“Their CSAM images and videos depicting them when they were minors are now forever out there on the internet in these dark net worlds of child sex predators,” Baehr-Jones said. “The harm from that is acute.”

The attorney said she hopes the lawsuit brings “accountability” for xAI and that “most importantly, this should never happen to any other victim.”

The lawsuit seeks unspecified damages and accuses xAI of production with the intent to distribute child pornography, distribution of child pornography, and possession of child pornography, among other claims.

“In our legal system, money is the way we make corporations pay for the harms that they have caused,” Annika Martin, another lawyer for the plaintiffs, said. “Because corporations are profit-seeking entities, hitting them in the wallet is the only way to influence their decision-making.”

Earlier this year, xAI’s Grok sparked massive backlash after the AI image generator was used to make nonconsensual sexualized images of real people.

In response, X, the social media site that Musk sold to xAI in March, said Grok would no longer be able to generate AI images of real people in sexualized or revealing clothing.

Ashley St. Clair, who gave birth to one of Elon Musk’s sons in 2024, sued xAI in January, alleging that Grok generated sexually explicit deepfakes of her at users’ request.

French authorities are also investigating Grok over sexualized deepfakes.





Lucia Moses

Leaked deck shows Elon Musk’s X is promoting Grok’s brand-safety scores after sexualized images backlash

Elon Musk’s X is promoting itself to potential advertisers with a new deck that underlines its commitment to brand safety, according to a copy of the leaked deck shared with Business Insider.

The pitch comes after X’s AI chatbot, Grok, generated “deepfake” sexualized images of women and children — a practice the company stopped in late January after a backlash. X said Grok would no longer generate AI images of real people in sexualized clothing.

The deck shows X is also promoting its use of “blocklists.” A blocklist is a list of sites or accounts that advertisers explicitly prevent their ads from appearing on. In the past, Musk’s X has taken legal action against advertisers who have used such tools to safeguard their ad placements.


In the deck, X touts its use of Grok to make the platform safe for advertisers.

In the deck, X said it had achieved nearly perfect “brand safe” or suitability scores using Grok, as measured by the ad-verification companies IAS and DoubleVerify.

It mentions ways it uses Grok to review posts and users’ profiles for brand suitability. For instance, if a user regularly posts about sensitive topics, the system can block ads from appearing alongside that user. X said it can target up to 4,000 keywords and 2,000 author handles this way.

The deck also promotes X as a place for brands to manage crises in real time.

X didn’t comment on the deck when reached by Business Insider.


The deck also says X’s blocklists stop ads from appearing on up to 50 specific publishers per ad group.

The deck was shared at an event for clients and agencies on February 26. The 2026 Brand Suitability Webinar was billed as “empowering brands with new tools for safety & reach on X.”

It’s unclear whether X’s newest charm offensive will sway advertisers.

X is one of the smallest social media platforms by ad spending, with EMARKETER estimating it has less than 1% of worldwide digital ad revenue. It has an outsized influence because of its use by public figures and as a news channel.

Since Elon Musk bought X, formerly known as Twitter, in 2022, its relationship with advertisers has been fraught, with Musk publicly criticizing advertisers that cut or limited advertising on the platform.


Elsewhere, the deck details what X says it has done to be brand-safe.

Advertisers left en masse after Musk’s acquisition. EMARKETER estimated its revenue would reach $2.2 billion in 2026, below its pre-acquisition level of $4.5 billion.

In 2023, Musk lashed out at advertisers, using an expletive on stage at an event directed toward those who had left.

And X is suing an advertiser trade group, alleging that its members conspired to boycott the platform in contravention of antitrust laws. The group denied it violated antitrust laws. The case is pending, with the last filing occurring on February 19.

X has also been criticized for loosening moderation and account-verification rules and for reinstating some banned accounts of provocative figures.






The EU’s privacy watchdog is investigating X over sexualized AI images

X is facing mounting criticism from foreign watchdogs over its generative AI chatbot, Grok.

The Irish Data Protection Commission said Tuesday that it had opened an inquiry into Elon Musk’s X, formerly known as Twitter.

The commission said in a press release that the inquiry was linked to the creation and publication of non-consensual, sexualized images of European Union residents on X using Grok’s generative AI functions. This included pictures of children.

The commission, which is responsible for enforcing the EU’s General Data Protection Regulation, said in the release that it notified X of the investigation on Monday.

X did not respond to a request for comment from Business Insider.

Grok is a chatbot developed by Musk’s xAI, now a subsidiary of his aerospace company SpaceX.

The commission’s investigation follows several weeks of controversy around Grok and X. The platform came under fire worldwide in January after reports emerged of Grok users generating sexualized images of real people, including minors.

Countries like Indonesia, Malaysia, and the Philippines temporarily suspended access to Grok. The European Commission launched an investigation into Grok, while India’s information technology ministry voiced its opposition via a letter to the chief compliance officer of X’s India operations.

California’s Attorney General, Rob Bonta, also said in early January that he had launched a probe into Grok’s AI deepfakes.

In response, X made Grok’s AI image generation tool a premium feature limited to paying subscribers and later stopped it from generating sexualized images altogether. However, a Business Insider report found that it was still possible to trigger these images in Grok’s web and mobile applications.

In response to backlash over Grok, Musk said in an X post on January 3, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”






California sends xAI cease-and-desist letter, saying it must stop allowing sexualized deepfake images of minors



  • California sent xAI a cease-and-desist letter, demanding it stop allowing deepfake images of minors.
  • Elon Musk’s xAI faces sustained criticism over Grok’s ability to create nonconsensual sexualized images.
  • The letter, sent by AG Rob Bonta, threatens legal action if the deepfakes continue to be permitted.

California Attorney General Rob Bonta has demanded that xAI prevent its chatbot, Grok, from continuing to create sexualized deepfake images of children.

Bonta’s office sent a cease-and-desist letter to Elon Musk’s AI startup on Friday after sustained criticism over the bot’s ability to create nonconsensual sexualized images, including those of minors.

Earlier this week, X said that it had implemented restrictions on Grok.

“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” X’s safety account said in a blog post on the platform on Wednesday. “This restriction applies to all users, including paid subscribers.”

However, that didn’t stop the X or Grok app from creating sexualized images, Business Insider’s Henry Chandonnet found on Thursday.

Representatives for the California Attorney General’s office did not immediately respond to requests for comment from Business Insider.

In an automated response to Business Insider, xAI said, “Legacy Media Lies.”

This is a developing story. Please check back for updates.





Shuby

Grok stops users from making sexualized AI images after global backlash

X will no longer allow Grok to create AI photos of real people in sexualized or revealing clothing, following widespread global backlash.

“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” X’s safety account said in a blog post on the platform on Wednesday. “This restriction applies to all users, including paid subscribers.”

The change was announced hours after California’s top prosecutor, Rob Bonta, said he launched an investigation into sexualized AI deepfakes, including those of children, generated by Grok. Bonta said that there had been a flood of reports in the last few weeks that Grok users were taking pictures of women and minors they found online and using the AI model to undress them in images.

Indonesia and Malaysia suspended Grok over the images, becoming the first countries in the world to ban the AI tool. Lawmakers in the UK publicly considered a suspension.

In Wednesday’s blog post, the social media company reiterated that image creation and editing via Grok on the X platform will now be available only to paid users, as an additional safety measure.

The company had restricted non-paying users last week after complaints from officials globally, but the move was criticized as insufficient.

A spokesperson for British Prime Minister Keir Starmer said it “simply turns an AI feature that allows the creation of unlawful images into a premium service.”

Elon Musk, who owns xAI, the maker of Grok, said that the UK government wanted “any excuse for censorship” in response to a post questioning why AI tools like Gemini and ChatGPT were not being looked into.

On Wednesday, a few hours before X’s official account posted about the ban on creating sexualized images, Musk asked users to try to get around the AI model’s image restrictions.

Bonta’s office and Starmer’s office did not immediately respond to requests for comment from Business Insider.



