
5 biggest takeaways from Nvidia’s Q4 earnings — from the new Vera Rubin chips to an emerging risk

Nvidia moved quickly to calm investor nerves during its earnings call on Wednesday.

The chipmaker delivered another blowout earnings report that underscored how little momentum the AI boom has lost. As the world’s most valuable company by market capitalization, Nvidia topped Wall Street expectations across the board in its fiscal fourth quarter and issued a forecast that sailed past analyst estimates.

The upbeat results arrive at a delicate moment for AI-linked stocks, which have recently shown signs of fatigue.

From incorporating Groq into Nvidia systems to an update on the new Vera Rubin chips, here are the biggest takeaways from Nvidia’s fourth-quarter earnings call.

1. Nvidia is becoming the backbone of Big AI

Over the course of the call, CEO Jensen Huang repeatedly positioned Nvidia at the center of the AI industry’s biggest players.

OpenAI’s latest Codex model is trained and runs on Nvidia’s Blackwell systems, and the companies are close to reaching a multibillion-dollar partnership, he said.

Meta is deploying Nvidia GPUs in its push toward superintelligence, and Nvidia also announced an up to $10 billion investment in Anthropic.

Huang said his goal is to ensure that every form of AI — from large language models to robotics — is built on Nvidia's platform.

“We want to take the great opportunity that we have as we’re in the beginning of this new computing era, this new computing platform shift, to put everybody on Nvidia,” he said.

2. Huang teases Groq integration as AI shifts to inference

When asked about Nvidia’s future road map and whether it plans to build customized chips for specific workloads, Huang said the company prefers to keep as much as possible within a single design.

That said, he teased a potentially significant move involving Groq, saying more details would come at Nvidia’s GTC conference in March.

Late last year, Nvidia struck a non-exclusive licensing agreement with Groq for its low-latency AI inference technology — a deal that also brought Groq’s founder and other top engineers on board.

“What we’ll do is we’ll extend our architecture with Groq as an accelerator in very much the ways that we extended Nvidia’s architecture with Mellanox,” Huang said, referring to the networking company Nvidia acquired in 2020.

As AI workloads shift from training large models to running them, the move suggests Nvidia will fold specialized inference capabilities into its core platform rather than abandon it.

3. Samples of the Vera Rubin chips have been shipped

Nvidia has begun shipping early samples of its next-generation Vera Rubin chips to customers.

Chief Financial Officer Colette Kress said during the earnings call that the company delivered “our first Vera Rubin samples” earlier this week and expects broader shipments of the new chips to begin in the second half of 2026.

“We expect every cloud model builder to deploy Vera Rubin,” Kress said.

Huang previously said at the Consumer Electronics Show in January that, compared with Blackwell, Rubin delivers more than triple the speed, runs inference up to five times faster, and provides significantly more inference compute per watt of energy.

4. Addressing future risks

Nvidia appears concerned about whether there will be enough resources to sustain the data center buildout.

In its latest 10-K report filed with the Securities and Exchange Commission, Nvidia listed the availability of data centers, energy, and capital to support the data center buildout as a risk factor, writing that “any shortage of these and other necessary resources could impact our future revenue and financial performance.”

“Expanding energy capacity to meet demand is a complex, multi-year process involving significant regulatory, technical, and construction challenges,” wrote Nvidia.

“In addition, access to capital can be particularly constrained for less-capitalized companies, which may face difficulties securing financing for large-scale infrastructure projects,” Nvidia added.

5. An OpenAI deal may finally be ‘close’

Huang addressed the company’s growing slate of strategic investments, including a deal with OpenAI, as questions mount over whether Nvidia’s strategy creates circular relationships with its own customers.

Speaking about Nvidia’s investments in AI companies such as Anthropic and OpenAI, Huang said the strategy is centered on strengthening the broader AI ecosystem and ensuring the next generation of software and hardware is built on Nvidia’s platform, from large language models to robotics.

“We want to take the great opportunity that we have, as we’re in the beginning of this new computing era,” Huang said.

Huang confirmed that Nvidia is “close” to finalizing a deal with OpenAI. The partnership was first outlined in 2025 as part of a massive AI infrastructure initiative that could reach $100 billion.






Here are the biggest announcements coming out of the 2026 Consumer Electronics Show, starting with Nvidia’s Vera Rubin chips

On Monday, ahead of the Consumer Electronics Show, Nvidia CEO Jensen Huang officially introduced the Vera Rubin architecture, which is now in production and expected to ramp up in volume in the second half of the year. The move follows a blockbuster year for the company's Blackwell chip, as demand for AI infrastructure continued to surge.

In a press briefing ahead of Huang’s keynote, Dion Harris, Nvidia’s senior director of HPC and AI infrastructure solutions, described Vera Rubin as “six chips that make one AI supercomputer.”

“Vera Rubin is designed to address this fundamental challenge that we have: The amount of computation necessary for AI is skyrocketing,” Huang told the audience during his CES presentation.

Huang added that, compared with Blackwell, Rubin marks a leap in performance: more than triple the speed, inference up to five times faster, and significantly more inference compute per watt of energy.

Rubin was first announced in 2024 and has been slated to replace Blackwell ever since. The early debut comes months ahead of the late-2026 timeline Nvidia had previously projected.

The architecture is named after astronomer Vera Rubin, whose observations provided key evidence for the existence of dark matter. Nvidia said in a press release that it is designed to support more complex, agent-style AI workloads, as well as more networking and data movement.

The Rubin systems are already lined up for deployment across much of the cloud industry. Nvidia said partners including Amazon Web Services, OpenAI, and Anthropic, along with the upcoming Doudna system at Lawrence Berkeley National Laboratory, plan to use the new platform.

The accelerated launch comes shortly after Nvidia reported record data center revenue, up 66% from a year earlier, driven largely by demand for Blackwell and Blackwell Ultra GPUs. Those chips have become a benchmark for the current AI boom and are widely seen as a test of whether spending on AI infrastructure is sustainable.

Huang has previously estimated that between $3 trillion and $4 trillion could be spent globally on AI infrastructure over the next five years. Nvidia said products and services built on the Rubin platform will begin rolling out from partners in the second half of 2026.






A $25B credit investor says betting only on AI chips misses the bigger cycle

The artificial intelligence boom is real — but there’s more to the AI trade than chips alone, according to a credit investor.

“This is a super-duper micro cycle that will outlast many investing careers,” said Scott Goodwin, the cofounder and managing partner of Diameter Capital Partners, a quote he attributed to his partner Jonathan Lewinsohn.

AI represents what Diameter Capital sees as a long-running, disruptive cycle — but buying the most obvious winners isn’t the only way to play it, he said on the “Goldman Sachs Exchanges” podcast published on Friday.

Diameter Capital, which manages approximately $25 billion in assets, has focused on where AI demand could create less obvious bottlenecks — and where those bottlenecks show up in credit markets.

The AI opportunity beyond chips

That view led Diameter to buy the unsecured debt of a midsize telecommunications company in 2023.

Goodwin said the bet was rooted in the idea that as companies move from training AI models to actually using them, demand shifts away from chips alone and toward the networks that carry data.

“It had to leave the data center. How would it leave? It would leave on the commercial fiber, the pipes,” he said.

The telco went on to sign more than $10 billion in contracts with hyperscale cloud providers, and the debt has rebounded to face value, Goodwin said.

Diameter Capital also made “a big bet” on a satellite company tied to wireless spectrum — a wager that later paid off after the company sold spectrum assets and the debt returned to face value.

Goodwin’s comments come amid growing debate over whether sky-high AI valuations are sustainable and whether investors are overlooking other opportunities tied to the technology.

Risks and rewards

Goodwin warned that parts of the AI-credit boom may be taking on risk that’s hard to price, especially in chip finance.

Some investors, he said, are taking on “residual risk,” or the riskiest slice of chip-financing deals — betting on what the hardware might be worth years from now. Cutting-edge firms refresh their technology often, so chips can quickly become outdated for some customers.

“We call up really smart people in Silicon Valley, we call up really smart people at Big Tech companies and ask them what the residual value is on these chips three, four, five, six, seven years forward,” he said. “None of them have a clue.”

Goodwin said the next phase isn't about capital expenditure on infrastructure — it's about competitive disruption.

“Who are the companies, who are the entities that are going to adopt AI and take a step forward versus their peers? And who are going to be the losers?” he asked.

“That is actually a longer cycle than the capex cycle, so that’s really interesting,” he said.






China’s economic slump isn’t stopping a billionaire boom in AI chips

China’s deepening property crisis has crushed household wealth and dented the fortunes of some of its biggest tycoons — but a new class of AI-era billionaires is rising fast.

This year, the standout winners are coming from China’s red-hot AI chip sector.

On Wednesday, shares of MetaX Integrated Circuits Shanghai — a GPU startup founded by former AMD executives — skyrocketed as much as 755% on their first day of trading on the Shanghai Stock Exchange’s tech-focused STAR Market, before closing up about 700%.

The surge catapulted its chairman and cofounder, Chen Weiliang, into one of China’s fastest-rising tech moguls. Chen’s stake in MetaX is worth about $6.5 billion, according to the Bloomberg Billionaires Index.

Other early insiders also saw eye-popping paper gains.

MetaX’s other two cofounders and co-chief technology officers, Peng Li and Yang Jian, hold stakes worth hundreds of millions of dollars after the blockbuster debut, according to Bloomberg’s calculations.

China’s AI rush

Chen’s rise follows that of another GPU entrepreneur, Zhang Jianzhong, the founder and CEO of Moore Threads Technology.

Earlier this month, Zhang’s net worth jumped to $4.3 billion after his company’s successful IPO, continuing a wave of investor enthusiasm for homegrown semiconductor players.

The richest figure in China’s AI chip scene is Chen Tianshi, a cofounder and CEO of Cambricon Technologies — a company retail traders have dubbed “China’s Nvidia.”

Cambricon’s Chen is now worth $22.5 billion, making him the country’s 16th-richest individual on Bloomberg’s list. He is the 115th richest person in the world.

These new fortunes reflect a sharp shift in investor sentiment.

Chinese AI and semiconductor stocks have been on a tear since the breakout success of China's DeepSeek-R1 AI model, released in January. The model helped spark a rally in local tech names and pushed the Hang Seng Tech Index up more than 20% so far this year.

Washington’s tightening export controls on advanced Nvidia chips also contributed to the boom.

Such restrictions on high-end AI processors have choked China’s access to cutting-edge US hardware and pushed Beijing to lean harder on domestic suppliers.

Still, China’s new AI billionaires remain far from the top of the country’s wealth rankings. The upper tier is still dominated by long-established tycoons.

In the top spot is Zhong Shanshan, the low-key bottled-water magnate behind Nongfu Spring, with a fortune of $68.1 billion, per Bloomberg.

Pony Ma, a cofounder and CEO of Tencent, ranks second with $66.5 billion — a fortune up 38% this year, on the heels of Tencent’s AI-induced rally — while ByteDance cofounder Zhang Yiming comes in third with $65.2 billion.






Trump gives Nvidia the green light to sell its H200 chips in China

Nvidia just scored a win from President Donald Trump.

In a post on Truth Social on Monday, Trump said he told Chinese leader Xi Jinping that the US would allow Nvidia to sell its H200 chips to “approved customers” in China.

“This policy will support American Jobs, strengthen US Manufacturing, and benefit American Taxpayers,” Trump said in the post.

Trump said, “25% will be paid to the United States of America.” He has previously proposed having the US take a cut of chip sales to China.

Nvidia’s stock was up in after-hours trading following Trump’s announcement.

“We applaud President Trump’s decision to allow America’s chip industry to compete to support high paying jobs and manufacturing in America,” a spokesperson for Nvidia said in a statement to Business Insider. “Offering H200 to approved commercial customers, vetted by the Department of Commerce, strikes a thoughtful balance that is great for America.”

Nvidia’s powerful H200 chips have been in high demand as AI models become more powerful.

While Nvidia was already able to sell some of its other chips to China, the US government has limited its ability to sell some powerful chips due to national security concerns. Sales of its H20 chips to China during Q3 were “insignificant,” CFO Colette Kress said on its latest earnings call.

“While we were disappointed in the current state that prevents us from shipping more competitive data center compute products to China, we are committed to continued engagement with the US and China governments and will continue to advocate for America’s ability to compete around the world,” Kress said during the Q3 earnings call.

Nvidia CEO Jensen Huang met with Trump last week to discuss export controls on chips.

“I’ve said repeatedly that we support export control, that we should ensure that American companies have the best and the most and first,” Huang told reporters last week.

Nvidia stock was up roughly 2% after hours.






Some Tesla shareholders say diverting Nvidia chips is further proof that Elon Musk doesn’t deserve a multibillion-dollar pay package

Several institutional shareholders of Tesla told Business Insider that Elon Musk’s decision to redirect a shipment of valuable Nvidia chips away from the EV company is further proof the CEO doesn’t deserve a multibillion-dollar pay package.

In May, a group of eight Tesla shareholders wrote a letter urging other investors to vote against Musk’s compensation package. The group is just one faction of a growing number of investors who said they plan to vote against the deal.

This package, now roughly worth $46 billion, was struck down in January by Delaware Chancery Court Chancellor Kathaleen McCormick, who said that the process to reach this “unfair price” for Musk was “deeply flawed.”

Tesla shareholders will vote on June 13 on whether to reinstate Musk’s deal.

But less than two weeks ahead of the shareholder vote, CNBC reported that Musk diverted a $500 million shipment of Nvidia chips, which are essential for powering artificial intelligence technology, away from Tesla and to his social media platform X instead.

An internal Nvidia memo indicating that Musk delayed the chip procurement was dated December, CNBC reported — months before the April earnings call in which the Tesla CEO insisted the automaker is an AI company. On that call, he also said he would aggressively expand the number of Nvidia chips at Tesla from 35,000 to 85,000 units by the end of 2024.

In response to the CNBC report, Musk said on X that “Tesla had no place to send the Nvidia chips to turn them on, so they would have just sat in a warehouse.”

“The south extension of Giga Texas is almost complete. This will house 50k H100s (Nvidia chips) for FSD training,” Musk added, referring to Tesla’s Full Self-Driving feature — a key component of the company’s promise to deliver autonomous taxis.

But some of the shareholders behind the effort to strike down Musk’s big payday are not convinced.

“The diversion of Nvidia’s processors to X and xAI is just another example of Tesla’s CEO reallocating Tesla’s resources in favor of his other businesses and treating Tesla as though it is his own coffer as a result of the lack of oversight by Tesla’s board,” Tejal Patel, the executive director of SOC Investment Group, wrote in an email to BI.

Patel added: “The key questions are why were these valuable processors ‘just sitting there’ in the first place, and if it was an operational issue, why was that not foreseen by management? Whatever decision-making there was for the processors to go unused by Tesla would have been up to CEO Musk.”

Musk did not respond to a request for comment from Business Insider.

SOC Investment Group is one of the eight shareholders that co-signed a letter urging investors to vote against the ratification of Musk’s stock options package and against the reelection of Musk’s brother, Kimbal, and James Murdoch for seats on Tesla’s board.

The group — made up of pension fund managers, an asset management firm, and a bank — also includes Amalgamated Bank, AkademikerPension, Nordea Asset Management, New York City Comptroller Brad Lander, SHARE, Unison, and United Church Funds.

In a statement to BI, Lander wrote that Musk's decision to divert Nvidia chips away from Tesla “should be a red flag to investors.”

“This sudden move adds to the growing concerns about Musk’s commitment to Tesla and highlights his glaring conflicts of interest,” he wrote. “There is a pressing need at Tesla for a genuinely independent board that will ensure Musk prioritizes company interests.”

Matthew Illian, the director of responsible investing for United Church Funds, similarly criticized Musk’s move to delay the shipment of Nvidia chips, stating that it was “further evidence” that the pay package “never achieved its purpose of maintaining the attention of Tesla’s CEO.”

“This is all about Elon building an empire for himself with investor money and we can’t let this happen,” he wrote in an email to BI.

It’s not immediately clear how much Tesla stock the eight shareholders own altogether.

Five of the eight shareholders, including Amalgamated Bank, Unison, Nordea, the New York City Retirement System, and United Church Funds, represent more than 4.9 million shares of Tesla stock.

As of Thursday, those shares are worth more than $878 million.

Spokespersons for SHARE, Nordea, and Unison could not be reached or did not immediately respond to requests for comment.

In addition to the eight shareholders, the California Public Employees’ Retirement System (CalPERS), which owns about 9.5 million shares of Tesla stock, signaled it would vote against Musk’s pay package.

“We do not believe that the compensation is commensurate with the performance of the company,” CalPERS CEO Marcie Frost told CNBC.

A CalPERS spokesperson declined to comment when asked about Musk’s decision to divert the shipment of Nvidia chips.

