
Polymarket takes down nuclear detonation bet after online backlash

  • Polymarket was allowing traders to wager on whether a nuclear weapon would detonate this year.
  • That stirred backlash online, particularly after suspicious trading in the wake of the Iran strikes.
  • The market has now been taken down.

If you were looking to make money on nuclear detonations, you now have one less avenue to pursue.

Overnight, Polymarket took down a market that allowed users to trade on whether a nuclear weapon would detonate by March 31, by June 30, or at any point before 2027.

Traders who bet yes on any of those timelines would be paid out if there were a nuclear detonation anywhere on Earth, whether an offensive strike, a test, or even an accident.

The market had over $650,000 in total trading volume as of Tuesday, according to an archived snapshot of the site. A message on the webpage now reads: “This event has been archived.”

It’s not yet clear why Polymarket took down the market, or whether users who put money into it will get refunds. An earlier version of the market, which covered 2025, resolved without incident last year.

A spokesperson for Polymarket did not respond to a request for comment.

The suspension came after several users on X expressed outrage about the existence of the market, particularly amid a raft of suspicious trades on the platform in the wake of the killing of Iranian Supreme Leader Ali Khamenei.

This isn’t the first time Polymarket has come under public scrutiny for hosting markets related to armed conflict.

After an anonymous Polymarket trader made over $400,000 on a suspiciously well-timed bet on Venezuelan President Nicolás Maduro’s political future, a lawmaker introduced a bill to ban prediction market insider trading by government officials.






Sam Altman says OpenAI will tweak its Pentagon deal after surveillance backlash

OpenAI said it is amending its contract with the Pentagon.

After public concerns that OpenAI’s new deal with the Pentagon would allow the government to use its AI for mass surveillance, CEO Sam Altman posted an internal memo to X on Monday evening, saying that the company is working with the Pentagon to “make some additions in our agreement.”

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of US persons and nationals,” Altman wrote on X.

“The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract,” Altman added.

Altman’s memo came after OpenAI struck a deal with the Pentagon on Friday to deploy its AI models on classified military networks. The contract landed in the middle of a standoff between the Pentagon and Anthropic, and came just a day before the US struck Iran.

In his note, Altman said that he got things “wrong,” saying the company should not have “rushed” to seal the deal.

“The issues are super complex, and demand clear communication,” he said. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Hours before the OpenAI deal was announced, President Donald Trump ordered federal agencies to halt use of Anthropic’s Claude system, following a breakdown in talks over the military use of AI. Anthropic had specific red lines: explicit contractual bans on mass domestic surveillance and fully autonomous weapons, which are systems capable of killing without human oversight.

As of Friday, nearly 500 OpenAI and Google employees signed on to an open letter in support of Anthropic’s decision.

The OpenAI deal soon triggered backlash and concerns that OpenAI’s tools would be used for domestic surveillance or for lethal autonomous weapons, claims that Altman immediately disputed. Protests took place in front of OpenAI’s offices in San Francisco and London, and QuitGPT, an advocacy group opposed to OpenAI, has launched a boycott and organized a protest scheduled for Tuesday.

Anthropic did not immediately respond to a request for comment.





Lucia Moses

Leaked deck shows Elon Musk’s X is promoting Grok’s brand-safety scores after sexualized images backlash

Elon Musk’s X is pitching itself to potential advertisers with a new deck that underlines its commitment to brand safety, according to a leaked copy of the deck shared with Business Insider.

It comes after Grok, the platform’s AI chatbot, shared “deepfake” sexualized images of women and children — a practice it stopped in late January after a backlash. The company said it would no longer generate AI images of real people in sexualized clothing.

The deck shows X is also promoting its use of “blocklists.” A blocklist is a list of sites or accounts that advertisers explicitly prevent their ads from appearing on. In the past, Musk’s X has taken legal action against advertisers who have used such tools to safeguard their ad placements.


[Image: X brand safety deck showing how the platform uses Grok for brand safety. Caption: X touts its use of Grok to make the platform safe for advertisers. Credit: X]

In the deck, X said it had achieved nearly 100% “brand safe” or suitable scores using Grok, as measured by the ad-verification firms IAS and DoubleVerify.

It mentions ways X uses Grok to review posts and users’ profiles for brand suitability. For instance, if a user regularly posts about sensitive topics, the system can block ads from appearing alongside that user’s posts. X said it can target up to 4,000 keywords and 2,000 author handles this way.

The deck also promotes X as a place for brands to manage crises in real time.

X didn’t comment on the deck when reached by Business Insider.


[Image: X brand safety deck slide on blocklists. Caption: X says its blocklists stop ads from appearing on up to 50 specific publishers per ad group. Credit: X]

The deck was shared at a February 26 event for clients and agencies, the 2026 Brand Suitability Webinar, which was billed as “empowering brands with new tools for safety & reach on X.”

It’s unclear if X’s newest charm offensive will sway advertisers.

X is one of the smallest social media platforms by ad spending, with EMARKETER estimating it draws less than 1% of worldwide digital ad revenue. But the platform has outsized influence because of its use by public figures and its role as a news channel.

Since Elon Musk bought X, formerly known as Twitter, in 2022, its relationship with advertisers has been fraught, with Musk publicly criticizing advertisers that cut or limited advertising on the platform.


[Image: X brand deck slide detailing what X says it has done to be brand-safe, using Grok and blocklists. Credit: X]


Advertisers left en masse after Musk’s acquisition. EMARKETER estimated the platform’s revenue would reach $2.2 billion in 2026, below its pre-acquisition level of $4.5 billion.

In 2023, Musk lashed out at advertisers who had left the platform, using an expletive on stage at an event.

And X is suing an advertiser trade group, alleging that its members conspired to boycott the platform in violation of antitrust laws. The group has denied the allegations. The case is pending, with the most recent filing on February 19.

X has also been criticized for loosening moderation and account-verification rules and for reinstating some banned accounts of provocative figures.





Katherine Tangalakis-Lippert

An AI replica of the late “Dilbert” cartoonist Scott Adams sparks backlash from his family

Scott Adams once sounded open to the idea of a digital afterlife. Now that he’s passed, social media posts attributed to his family say an AI version of the “Dilbert” creator circulating online is unauthorized — and deeply distressing.

In a 2021 podcast clip, the cartoonist said he granted “explicit permission” for anyone to make a posthumous AI based on him, arguing that his public thoughts and words are “so pervasive on the internet” that he’d be “a good candidate to turn into AI.” He added that he was OK with an AI version of him saying new things after he died, as long as they seemed compatible with what he might say while alive.

Shortly after the 68-year-old’s January death from complications of metastatic prostate cancer, an AI-generated “Scott Adams” account began posting videos of a digital version of the cartoonist speaking directly to viewers about current events and philosophy, mirroring the cadence and topics the actual human Adams discussed for years.

His family says it’s a violation, not a tribute.

A February 5 post on Adams’ official account attributed to his brother, Dave Adams, insisted the cartoonist “never intended, never would have approved an AI version of him that wasn’t authorized by himself or his estate.”

“The real Scott Adams gave explicit permission on the record multiple times for people to create and operate an AI version of him,” the AI Adams said in a post on February 5. “So this iteration exists as a direct fulfillment of that stated wish.”

The official Adams account reiterated the family’s objection on February 17, saying the estate was “kindly but firmly” asking anyone using AI to recreate his voice or likeness to stop, calling the digital replicas a “fabricated version” of Adams that is “deeply distressing.”

“This is not a tribute. It is not an honor. It is an unauthorized use of identity,” the post read.

The Adams estate did not respond to requests for comment from Business Insider. In a Friday interview, the creator of the AI Adams said he’d tried to get in touch with the estate to collaborate on the project, but had been blocked on social media.

“It’s my belief this is something that he wanted,” John Arrow, an AI venture capitalist who created the digital Adams, said. “And I’m not trying to predict what he was thinking. I’m just going by his statements, what he tweeted over and over and over again. I’ve looked and looked and can find no evidence of any type of revocation. If there was anything that suggested this is what he didn’t want, I would stop.”

The dispute underscores the growing legal and ethical fault lines around “AI afterlives” — and how quickly technology can outpace the rules meant to govern it.

‘It’s a deepfake’

Karen North, a University of Southern California professor specializing in digital social media and psychology, said calling the AI-generated Adams an avatar, as some have online, softens what it is.

“It’s a deepfake,” North told Business Insider.

The troubling part, she said, is how a realistic imitation can surface while a family is grieving and potentially say things the real person never would have said. North added that since many Americans are “giving up so much information” through apps that capture faces and voices and viral quizzes that collect personal details, it is increasingly easy to recreate someone without permission.

“I find it very disturbing,” she said.

Betsy Rosenblatt, an intellectual property lawyer and professor at Case Western Reserve University, said her initial reaction was that the AI Adams would be “unethical in the extreme” unless authorized by Adams himself or his estate after his passing.

“The temporariness of people is part of what makes life special,” she said.

Legally, she said, the central issue is the right of publicity — protections over a person’s name, image, and likeness. Still, those laws are more focused on privacy and economics than on grief.

The right of publicity is “chiefly concerned with economic remedies,” Rosenblatt said.

The strongest claims typically involve money: an AI version could harm existing deals tied to Adams’ identity or block the family from striking their own.

Rosenblatt described two potential economic harms: “One is that it could be harming some financial arrangement that they already have. Another is that it might stand in the way of their making some competitive financial arrangement,” she said.

The legal analysis also hinges on whether the account is commercial. Courts often ask whether the speech proposes a commercial transaction.

If the digital replica isn’t selling anything, Rosenblatt said, it becomes “more likely to be considered a First Amendment protected expression” for the anonymous creator — not a “slam dunk,” but a stronger argument.

The AI Adams identifies itself as artificial intelligence at the start of its clips and does not appear to solicit money. Arrow told Business Insider his plan isn’t to monetize the project or sell products through the AI Adams — in fact, it costs Arrow’s business, Age of AI, about $1,000 to produce each episode — but to ensure the world doesn’t lose “another great intellect.”

“When we have a great mind, and he or she passes away, we lose that person forever,” Arrow said. “AI is giving us a chance to maybe not make them immortal, but at least preserve a lot of their teachings and allow them to adapt and give their insight on new situations.”

Consent isn’t the same as a contract

The estate’s objections sit uneasily alongside Adams’ 2021 comments offering “explicit permission” for AI versions of him. Arrow said the primary reason he chose to recreate Adams, rather than other public or historical figures, was because it seemed clear that he was willing to be immortalized by AI.

North said offhand remarks about technology shouldn’t automatically be treated as binding authorization. Adams was “an incredibly bright, incredibly creative person” who often pushed boundaries, she said, and comments made in conversation “may not be legally binding in ways contracts and intellectual property rights are legally binding.”

“Let this be a warning to all of us: be careful what you say, because he’s now put his loved ones in a difficult position as they protect his legacy,” North said.

Rosenblatt said Adams’ wishes “would certainly matter in an ethical sense,” but may not matter legally “unless he gave somebody the legal rights to do that.”

There is no comprehensive federal law governing posthumous AI likeness, but some states — like New York and California — have recently enacted laws requiring consent from heirs or estate executors before creating digital replicas.

Beyond legal questions lies a deeper ethical one: who controls a person’s persona after they’re gone?

North said people “should own the rights to our own personas,” and when they die, those rights “should go to our loved ones,” not become a free-for-all. AI replicas, she warned, can drift off-brand or reshape public memory.

“Shakespeare should always sound like Shakespeare,” she said. “Dr. Seuss should always sound like Dr. Seuss.”

For now, the AI “Scott Adams” fight is one family’s public line-drawing exercise. It may also be a preview of a broader reckoning in a world where convincing digital imitations are easy to make — and where the law is still struggling to answer who gets to decide whether the dead keep talking online.






Grok stops users from making sexualized AI images after global backlash

X will no longer allow Grok to create AI images of real people in sexualized or revealing clothing, following widespread global backlash.

“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” X’s safety account said in a blog post on the platform on Wednesday. “This restriction applies to all users, including paid subscribers.”

The change was announced hours after California’s top prosecutor, Rob Bonta, said he had launched an investigation into sexualized AI deepfakes generated by Grok, including those of children. Bonta said there had been a flood of reports in the last few weeks that Grok users were taking pictures of women and minors they found online and using the AI model to digitally undress them.

Indonesia and Malaysia suspended access to Grok over the images, becoming the first countries in the world to ban the AI tool. Lawmakers in the UK publicly considered a suspension as well.

In Wednesday’s blog post, the social media company reiterated that image creation and the ability to edit images via Grok on the X platform will now be available only to paid users, as an additional safety measure.

The company had restricted non-paying users last week after complaints from officials around the world, but the move was criticized as insufficient.

A spokesperson for British Prime Minister Keir Starmer said it “simply turns an AI feature that allows the creation of unlawful images into a premium service.”

Elon Musk, who owns xAI, the maker of Grok, said the UK government wanted “any excuse for censorship” in response to a post questioning why AI tools like Gemini and ChatGPT were not facing similar scrutiny.

On Wednesday, a few hours before X’s official account posted about the ban on creating sexualized images, Musk asked users to try to get around the AI model’s image restrictions.

Bonta’s office and Starmer’s office did not immediately respond to requests for comment from Business Insider.






Internal emails show what happens inside Nvidia when its products face backlash

An internal Nvidia email chain revealed how senior executives at the chip giant — including founder and CEO Jensen Huang — mobilized in response to customer criticism of a key product launch late last year.

The thread offers a glimpse into how the company responds to public backlash as it expands products designed for individual developers and researchers.

The thread, which Business Insider has seen, centered on the launch of DGX Spark, a desktop AI system designed for developers and researchers to build AI products and work on apps for data science, medicine, and other fields.

While much of Nvidia’s business targets data center customers, Huang underscored Spark’s significance in the thread, calling it the “ultimate developer’s platform — out of the box easy to run all NVIDIA.”

Spark drew criticism soon after its launch, with some users citing software stability and performance issues that garnered coverage in other tech outlets.

An Nvidia spokesperson declined to comment.

Anshel Sag, a Moor Insights & Strategy analyst who has tracked Nvidia launches for 15 years and was an early DGX Spark tester, said the company’s long experience releasing graphics cards in the gaming industry — where products are routinely scrutinized — has made it adept at handling public feedback, with Huang typically keeping a close eye on new releases.

In recent years, the company has become even more reactive, Sag said, due to increased internal resources and “sensitivity about the stock price and how negative sentiment can draw that down.”

Nvidia CEO Jensen Huang steps into the fray

In the fall of last year, AstraZeneca executive director Justin Johnson wrote in a LinkedIn post that while the DGX Spark met performance and speed claims, the software experience was buggy and unstable.

After an Nvidia executive shared Johnson’s post in an internal email thread, Huang entered the fray.

“Jump on x and say you will fix,” he wrote.


[Image: A founder’s edition of the DGX Spark on display at a Paris tech show last June. Credit: Chesnot/Getty Images]



Subsequently, an Nvidia engineer replied that the company had reached out to Johnson to resolve most of the issues, which were caused by a version mismatch in CUDA, Nvidia’s software for building AI apps powered by its GPUs.

Johnson responded that he appreciated the outreach and was exploring setting up DGX Spark at the pharmaceutical company, the chain said.

Nvidia staffers ramp up responses

Following Johnson’s criticism, Nvidia staffers saw other unfavorable responses online and set up a social listening campaign to flag complaints from other influential figures, as well as discussions on Nvidia forums and Reddit, the emails said.

Staffers tracked complaints and engaged directly with key critics who raised concerns about DGX Spark’s performance, heating issues, and pricing.

Another incident involved the researcher Christopher Kouzios, who wrote on LinkedIn that he’d purchased DGX Spark to conduct medical research after his daughter died from a rare brain tumor, with the goal of studying cancer risk in his sons.

Kouzios said software incompatibility had rendered the system unusable and that he’d only received an automated acknowledgment 38 hours after filing a support ticket.

After an Nvidia executive flagged the post, team members said they were fixing the bug, according to the emails. The executive later circulated an updated post in which Kouzios lauded Nvidia’s customer support.

“While the situation initially frustrated me, Nvidia’s response time was exceptional,” Kouzios told Business Insider. “In more than 33 years working with large technology companies, I have never seen an organization respond that quickly to public technical feedback.”

It’s common for hardware to ship without fully finished software, Sag said, adding that Nvidia tends to be more “high-touch” than other tech companies in fielding complaints — an approach that flows down from an exceptionally “hands-on” CEO.

Nvidia has previously faced some launch hiccups and early criticism for new products, such as its Blackwell rollout, which encountered manufacturing challenges.

While a CEO’s involvement is notable and Nvidia’s backchannel efforts appeared to placate critics, such an approach isn’t without risks, another analyst said.

“C-suite engagement during product controversies has become more common in tech, particularly for founder-led companies,” said Kate Holterhoff, a senior industry analyst at RedMonk. “It can signal authenticity and accountability, but it also carries reputational risk if the response is perceived as defensive or dismissive.”

Have a tip? Contact this reporter via email at gweiss@businessinsider.com or Signal at @geoffweiss.25. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.



