
Lawsuit against Elon Musk’s xAI alleges Grok created sexualized deepfakes of 3 minors

A new lawsuit against Elon Musk’s xAI alleges that its flagship chatbot, Grok, was used to create sexualized deepfake images of three minors — content the complaint says amounts to child sexual abuse material.

The proposed class action, filed Monday in a California federal court, accuses the AI startup of profiting from the “sexual predation of real people, including children.”

“Nearly all the companies creating, marketing, and selling AI recognized the dangers of such a tool and chose to enact industry-standard guardrails that would prevent the use of their products by one extremely dangerous group: child sex predators. XAI did not,” the lawsuit says.

Representatives for Musk and xAI did not immediately respond to a request for comment by Business Insider.

Musk previously said in a January post on X that he was “not aware of any naked underage images generated by Grok. Literally zero.”

Grok generates images based on user prompts, he wrote.

“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” Musk said. “There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

The lawsuit against xAI says that the Tennessee plaintiffs are “three of the minor victims of xAI’s knowing production, possession, and distribution of AI-generated child sexual abuse material” depicting them.

The plaintiffs, identified in the court papers only as Jane Doe 1, Jane Doe 2, and Jane Doe 3, allege that xAI's AI tools were used to make nude images and videos of them. Jane Doe 1, who recently turned 18, was a minor at the time of the alleged incidents, while the two other plaintiffs are still minors, according to the lawsuit.

In December 2025, Jane Doe 1 received a message from an anonymous Instagram account warning her that “pics” of her had been generated by someone she knew and spread across the group-chat platform Discord.

“Through a series of messages, the anonymous user went on to explain that the perpetrator had uploaded a folder of image and video files depicting her and other minor females to Discord,” the lawsuit says.

The anonymous user eventually sent Jane Doe 1 several sexualized AI-generated images and videos of her and other minor girls, according to the lawsuit.

“At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses,” the lawsuit says. “The images showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude.”

Jane Doe 1 alerted the other minors in the images and their families, and a criminal investigation was opened in Tennessee, according to the lawsuit. It added that local police arrested a suspect in connection with the case in December 2025.

Last month, Jane Doe 2 learned through the investigation that at least two of her images had also been used to produce sexually explicit AI-generated content using xAI’s tools, the lawsuit says.

Local law enforcement told the girl and her mother that one image taken of the girl on the beach in a blue bikini had been “morphed to depict her without any clothes,” the lawsuit says.

Authorities also informed Jane Doe 3 that the AI-generated images recovered from the suspect’s phone included one that had been “morphed to depict her fully nude,” according to the lawsuit.

Vanessa Baehr-Jones, an attorney for the plaintiffs, told Business Insider that her clients have endured a “nightmare.”

“Their CSAM images and videos depicting them when they were minors are now forever out there on the internet in these dark net worlds of child sex predators,” Baehr-Jones said. “The harm from that is acute.”

The attorney said she hopes the lawsuit brings “accountability” for xAI and that “most importantly, this should never happen to any other victim.”

The lawsuit seeks unspecified damages and accuses xAI of production with the intent to distribute child pornography, distribution of child pornography, and possession of child pornography, among other claims.

“In our legal system, money is the way we make corporations pay for the harms that they have caused,” Annika Martin, another lawyer for the plaintiffs, said. “Because corporations are profit-seeking entities, hitting them in the wallet is the only way to influence their decision-making.”

Earlier this year, xAI's Grok sparked massive backlash after its image-generation feature was used to make nonconsensual sexualized images of real people.

In response, X, the social media site that Musk sold to xAI in March, said Grok would no longer be able to generate AI images of real people in sexualized or revealing clothing.

Ashley St. Clair, who gave birth to one of Elon Musk’s sons in 2024, sued xAI in January, alleging that Grok generated sexually explicit deepfakes of her at users’ request.

French authorities are also investigating Grok over sexualized deepfakes.

I spent 26 hours in Qatar Airways’ business class. Not all seats are created equal, but I get why it’s so beloved.

We flew from London Gatwick Airport rather than Heathrow because the flights were about $1,000 cheaper.

Arriving at Gatwick, it was a luxury to have a dedicated check-in counter with barely any queue and then fast-track through security too.

After that, though, I wasn’t super blown away.

Qatar Airways doesn’t have a dedicated lounge at Gatwick, so its business-class travelers can use the Plaza Premium Lounge, which anyone can pay to use. I found it to be quite busy and a bit underwhelming, with a rather uninspiring view.

However, it has a separate area for Qatar Airways customers where we could order from a small à la carte menu. I got a burger, and my husband had a goat-cheese sandwich — it was nice to have complimentary food.

We were also given “premium” drinks vouchers for certain beverages, such as prosecco, though Champagne would’ve cost extra.
Erica Sweeney

4 new jobs that AI has created in HR and people management

More human resources teams are using artificial intelligence for a variety of functions. Amazon and Siemens, for example, use AI for HR to analyze résumés and make job recommendations based on an applicant’s skills.

This year, 31% of organizations report using some type of AI technology in HR, according to a 2025 survey of nearly 10,000 HR professionals by Sapient Insights Group.

Many companies are also creating new HR job titles that require AI skills, such as data literacy, analytics, large-language model prompt engineering, and workflow redesign.

Moreover, in 2026, many organizations are willing to offer higher salaries for AI-related skills, including data science, data analytics, and business intelligence, according to a Robert Half report.

“Historically, technological shifts have reshaped some jobs and the way we work, but they’ve also opened doors to new roles and skills,” said Christina Giglio, technology hiring and consulting expert at Robert Half. “AI seems to be continuing that trend.”

Here are four new HR job titles that are appearing in the AI age, according to experts.

1. AI adoption and employee experience lead

This role coordinates the adoption of AI tools, helping people understand the technology’s value, how to use it, and how it benefits them, ensuring that AI rollouts go smoothly.

“AI doesn’t eliminate people,” says Anthony Donnarumma, CEO of the recruiting agency 24 Seven. Companies need individuals to manage the relationship between human and machine work to ensure the technology produces consistent outcomes and meets an organization’s needs, he says.

Humans are needed to oversee how teams adopt AI in their daily work, says Lana Peters, chief revenue and experience officer at Klaar, a performance management software company.

The job often includes training managers, redesigning workflows, and connecting company culture and technology while helping employees adapt to the changes.

“Without this role, AI use is at risk of being done in silos or improperly, which is why we’re seeing this position pop up across the job market,” Peters adds.

2. AI trainer or coach

This role trains AI systems, such as chatbots, AI agents, and other tools, to ensure the technology works effectively to produce the desired HR outcome. This might include organizing data and reviewing it for bias.

The role is “part technical, part editorial, part quality control,” says Ronni Zehavi, CEO and co-founder of HR tech platform HiBob. The individual in this position curates and labels data for AI to use, reviews outputs, and teaches AI systems how to respond to data to meet company goals.

This person “improves AI quality through hands-on review and feedback,” he explains.

3. People data and AI insights lead

Turning “raw people data,” such as from performance reviews and manager check-ins, into insights that leaders can act on is this role’s focus, Peters says.

This individual helps leaders make data-based decisions on their workforce strategy and better understand “how employees are performing, when they are ready to be elevated to a new role, and when they may be a flight risk,” she adds.

Data literacy, analytical thinking, and the ability to interpret AI outputs are crucial skills for this role, says Lauren Winans, CEO and principal human resources consultant at Next Level Benefits.

“Additionally, employers will value soft skills such as ethical awareness, critical thinking, collaboration, and the capacity to translate AI capabilities into strategic decisions, especially in roles that bridge technology, policy, and operations,” Winans says.

4. Responsible AI and people governance manager

Policies and oversight are needed to ensure that AI use is safe, fair, and transparent; this role sets those “guardrails,” Peters says. This individual oversees how employee data is used and ensures there’s no bias that could negatively impact them, she says.

Also referred to as an AI governance and risk lead, the job establishes policies to “keep AI use safe and compliant” and focuses on privacy protection and accuracy monitoring, helping organizations manage regulatory shifts and legal or reputational risks, Donnarumma says.

Essentially, Zehavi says, the role “guides teams on fairness, transparency, and compliance, helping companies use AI in ways that support people rather than unintentionally excluding them.”