
Nvidia GTC 2026 Full of Highlights: From Chipmaker to AI Operator, How Does the Vera Rubin System Launch the Next Decade?

Source: TradingKey

TradingKey - On March 16 local time, Nvidia (NVDA)'s annual developer conference, GTC 2026, kicked off at the SAP Center in San Jose, drawing more than 30,000 developers. The undisputed centerpiece of this AI industry event was founder and CEO Jensen Huang's two-and-a-half-hour keynote: he recast Nvidia's positioning from a "chip company" to an "AI infrastructure and factory operator," and issued a striking prediction that "cumulative revenue from 2025 to 2027 will exceed $1 trillion," sketching an unprecedented growth blueprint for the global AI computing power market.

The Confidence Behind the Trillion-Dollar Demand

At the conference, Nvidia CEO Jensen Huang sent a signal to the market that exceeded expectations—Nvidia's Blackwell and Rubin AI chip architectures are expected to create at least $1 trillion in cumulative order demand by the end of 2027. This forecast is a direct doubling of the $500 billion target announced at the same time last year, instantly igniting market enthusiasm.

Jensen Huang stated: "At this time last year, we saw $500 billion in high-confidence demand covering the Blackwell and Rubin architectures through 2026; however, the explosion in AI computing demand has far exceeded expectations, and the market size by 2027 will reach at least $1 trillion—actual demand might even be higher, and we are prepared for demand outstripping supply."

This is not merely a performance target; it reflects the exponential explosion of AI computing power demand.

Over the past two years, as large models have evolved from the "perception-generation" stage to the "inference-execution" stage, computing power consumption has climbed exponentially.

Jensen Huang pointed out that the core reason Nvidia dared to give such an aggressive forecast lies in the universality and cost advantages of its systems—Nvidia's platform can adapt to AI models in almost all fields, allowing every penny of customer investment to yield a long-term return, which is also a key reason why it has become the world's "lowest-cost AI infrastructure."

60% of Nvidia's revenue comes from the world's top five hyperscale cloud service providers, while the remaining 40% covers diverse scenarios such as sovereign clouds, enterprise applications, industrial AI, robotics, and edge computing. This pattern of "head concentration + long-tail diffusion" shows that AI computing power is transforming from the exclusive demand of internet giants into infrastructure investment for the entire industry.

Analysts believe that Nvidia's trillion-dollar goal is not only a sign of confidence in its own products but also reflects the expansion speed of the global AI infrastructure market.

As large model parameters continue to grow, inference demand surges, and enterprise-grade AI applications accelerate their implementation, AI computing power is becoming a long-term capital expenditure direction alongside cloud computing. This expectation also dispels market concerns about the "AI computing cycle peaking"—current global investment in AI computing is still in its early stages, and tech companies' investment in AI servers, GPUs, and related systems will maintain high-intensity growth in the coming years.

Jensen Huang emphasized in his speech: "The demand curve for AI computing has only just begun to steepen; we are standing at the starting point of a once-in-a-decade technological transformation." Nvidia's trillion-dollar forecast is a clear signal of this impending change.

Asymmetric Complementarity between the Vera Rubin System and Groq

The most significant release of this conference was Vera Rubin, Nvidia's most complex AI computing system to date. Unlike previous releases centered on a single chip, Vera Rubin is a complete supercomputer platform comprising seven chips and five types of racks.

As the computing core of the entire system, the Vera Rubin NVL72 rack achieves a double leap in AI training and inference efficiency. It integrates 72 Rubin GPUs and 36 Vera CPUs, building a unified computing architecture through the next-generation NVLink 6 high-speed interconnect network, reducing data transfer latency between the GPU and CPU to the microsecond level.

When training trillion-parameter Mixture-of-Experts (MoE) models, the NVL72 requires only 1/4 the number of GPUs compared to the previous-generation Blackwell platform, while the generation cost per token is directly reduced to 1/10 of the original—meaning companies will save 90% in computing power investment to train AI models of the same scale, greatly accelerating the commercial rollout of large models.
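The keynote's claimed ratios can be sanity-checked with simple back-of-envelope arithmetic. The sketch below only restates Nvidia's stated figures (1/4 the GPUs, 1/10 the per-token cost); the fleet size is an illustrative placeholder, not a benchmark.

```python
# Back-of-envelope check of the keynote's claimed ratios.
# The 1/4 GPU count and 1/10 per-token cost are Nvidia's claims;
# the fleet size below is an arbitrary illustrative number.

blackwell_gpus = 10_000             # hypothetical Blackwell fleet size
blackwell_cost_per_token = 1.0      # normalized cost unit

nvl72_gpus = blackwell_gpus / 4                        # "1/4 the number of GPUs"
nvl72_cost_per_token = blackwell_cost_per_token / 10   # "1/10 the cost per token"

savings = 1 - nvl72_cost_per_token / blackwell_cost_per_token
print(f"GPUs needed: {nvl72_gpus:.0f} vs {blackwell_gpus}")
print(f"Per-token cost savings: {savings:.0%}")
```

A 1/10 per-token cost is exactly where the article's "save 90%" figure comes from.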

Additionally, another highlight of this release is the Vera CPU rack, the world's first CPU cluster specifically designed for Agentic AI. Unlike traditional CPUs focused on general-purpose computing, the Vera CPU is deeply optimized for multitasking parallelism and long-context processing capabilities of agents. A single rack can integrate 256 processors, with a computing efficiency twice that of traditional rack-level CPUs, supporting tens of thousands of agents running online simultaneously.

Currently, cloud service providers such as Alibaba, ByteDance, and Cloudflare have announced the deployment of Vera CPU racks to build agent development platforms, marking the beginning of AI computing power's penetration from general model training into more specialized agent scenarios.

If the Vera Rubin NVL72 solves the efficiency problem of "large-scale computing," then the Groq 3 LPX rack fills the technical gap in "ultra-high-speed inference"—this is also the first product launched by Nvidia since its acquisition of Groq last year. The Groq 3 LPU processor features a massive 500MB on-chip SRAM, capable of completing highly complex inference calculations without external video memory, with end-to-end latency an order of magnitude lower than traditional GPUs.

To maximize hardware potential, Nvidia developed the Dynamo intelligent scheduling system, which dynamically allocates computing power to the different stages of large model inference. The memory-intensive "prefill" stage is handled by the Rubin GPU, while the latency-sensitive "token decoding" stage is assigned to the Groq LPU.

This "asymmetric collaborative" architectural design increases inference throughput for trillion-parameter models by 35 times at the same power level, directly pushing the boundaries of the economic feasibility of AI inference to new heights.
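The routing idea described above can be sketched in a few lines: memory-bound prefill work goes to GPU-class workers, latency-bound decode work to LPU-class workers. This is a minimal illustration of the disaggregated prefill/decode pattern, not the actual Dynamo API; every class and field name below is hypothetical.

```python
# Minimal sketch of "asymmetric" inference scheduling: prefill
# (bandwidth/memory-bound) routes to GPU workers, decode
# (latency-bound) routes to LPU workers. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int    # prefill work is proportional to prompt length
    max_new_tokens: int   # decode work is proportional to tokens generated

class AsymmetricScheduler:
    def __init__(self, gpu_workers, lpu_workers):
        self.gpu_workers = gpu_workers   # high-capacity memory pool
        self.lpu_workers = lpu_workers   # low-latency on-chip SRAM pool

    def route(self, req: Request) -> dict:
        # Least-loaded worker in each pool gets the corresponding stage.
        prefill = min(self.gpu_workers, key=lambda w: w["load"])
        prefill["load"] += req.prompt_tokens
        decode = min(self.lpu_workers, key=lambda w: w["load"])
        decode["load"] += req.max_new_tokens
        return {"prefill": prefill["name"], "decode": decode["name"]}

gpus = [{"name": "rubin-0", "load": 0}, {"name": "rubin-1", "load": 0}]
lpus = [{"name": "lpu-0", "load": 0}]
sched = AsymmetricScheduler(gpus, lpus)
assignment = sched.route(Request(prompt_tokens=4096, max_new_tokens=256))
print(assignment)
```

The design choice this toy mirrors is that the two stages have opposite bottlenecks, so splitting them across specialized hardware can raise total throughput without raising power draw proportionally.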

Meanwhile, Nvidia also officially launched Spectrum-X, the world's first mass-produced Co-Packaged Optics (CPO) switch, weighing in decisively on the industry's "copper versus fiber" roadmap debate. Compared to traditional pluggable optics, Spectrum-X packages the optical modules directly with the switch chip, improving optical power efficiency fivefold, reaching a single-port bandwidth of 2Tb/s, and increasing network reliability tenfold.

Jensen Huang stated clearly in his speech: "In the future, we need more copper cable capacity, and we also need more optical chip and CPO capacity"—this means Nvidia will advance both copper cable and optical interconnect technologies simultaneously, providing flexible connection solutions for AI data centers of various scales.

From OpenClaw to NemoClaw: The "Shrimp Farming" Ecosystem

If hardware is the "plant" of the AI factory, then agents are the "workers." Jensen Huang called the open-source project OpenClaw "the most popular open-source project in human history," noting its adoption speed far exceeds Linux, and defined it as the "operating system" of the agent era.

To solve the security, privacy, and management challenges in the large-scale deployment of agents, Nvidia simultaneously launched the NemoClaw full-stack software platform. This tool, jokingly referred to by developers as "one-click shrimp farming," can complete the deployment, scheduling, and monitoring of AI agents with a single command.

NemoClaw integrates the Nemotron large language model and the OpenShell runtime environment. It not only allows agents to understand natural language instructions directly but also fills in key capabilities such as security sandboxes, privacy protection, and policy engines—developers can define the operational boundaries of agents through a visual interface to ensure they operate within compliance limits, even enabling privacy computing modes where "data stays local while computing is schedulable."
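The "operational boundaries" idea can be illustrated with a tiny policy-check wrapper: every tool call an agent makes passes through an allowlist before it executes. NemoClaw's real interface is not described beyond the article's summary, so every name below is a hypothetical sketch of the pattern, not the platform's API.

```python
# Illustrative sketch of agent "operational boundaries": a policy
# allowlist is checked before any tool runs. All names are
# hypothetical; this is not NemoClaw's actual interface.

ALLOWED_TOOLS = {"search_docs", "summarize"}   # policy: tool allowlist

def run_tool(tool_name: str, payload: str) -> str:
    """Dispatch a tool call only if the policy engine permits it."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"policy violation: {tool_name!r} not allowed")
    # ... dispatch to the sandboxed tool implementation here ...
    return f"{tool_name} handled: {payload}"

print(run_tool("summarize", "quarterly report text"))
```

A real policy engine would also cover data egress and resource limits, but the enforcement point — checking before dispatch, not after — is the same.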

Jensen Huang predicted in his speech that every SaaS company in the future will transform into an AaaS (Agent-as-a-Service) company. Silicon Valley's recruitment market has already reacted, with many tech companies starting to use a new compensation system of "base salary + token quota" to compete for top talent in the field of agent development.

How DLSS 5 Reshapes Gaming and Future Computing

Additionally, Nvidia CEO Jensen Huang officially released DLSS 5 technology at the conference, calling it "the most significant breakthrough in computer graphics since real-time ray tracing in 2018," and even defining it as the "GPT moment" for graphics.

From the synchronized demo of "Resident Evil: Requiem" shown on-site, after enabling DLSS 5, the natural luster changes of character hair, delicate shadows in the folds of leather clothing, and dynamic reflection details on wet streets moving with the viewpoint all presented a quality previously only achievable with offline cinematic rendering. This marks a leap for real-time game graphics from "rule-approximated realism" to "cinematic immersion."

Unlike previous versions of DLSS that focused on AI upsampling or frame generation technology, DLSS 5 introduces the "Real-Time Neural Rendering" core architecture for the first time, directly generating complete pixels with lighting and material interactions through end-to-end trained AI models.

Jensen Huang emphasized in his speech that the revolutionary nature of this technology lies in solving the core contradiction of generative AI in professional creative fields—significantly enhancing visual realism while ensuring artists have absolute control over content. Developers can precisely control AI rendering effects through fine-tuned parameters such as intensity adjustments, color grading, and local masks, making AI a "smart assistant" for art teams rather than a "creative replacement."

Currently, DLSS 5 has gained support from top global game developers including Bethesda, CAPCOM, NetEase, Tencent, and Ubisoft. More than 30 games, such as "Assassin's Creed Shadows," "Starfield," "Naraka: Bladepoint," and "Where Winds Meet," will be among the first to complete adaptation this autumn.

Jensen Huang also pointed out that the technical value of DLSS 5 is by no means limited to the gaming field. The fusion paradigm of "structured data + generative AI" it represents will extend to broader scenarios such as enterprise computing, architectural visualization, and virtual production in the future.

At the same time, DLSS 5 is compatible with the entire existing RTX series platform and will be seamlessly integrated via the Nvidia Streamline framework, significantly reducing adaptation costs for developers and allowing more players to experience the leap in image quality brought by AI technology.

Dispelling Market AI Doubts

Goldman Sachs (GS) released a research report immediately after the GTC 2026 conference, pointing out that Jensen Huang's remarks accurately hit on two core investor concerns, effectively alleviating market growth anxiety regarding the AI industry.

Goldman Sachs noted that Nvidia raised its 2027 data center business order guidance to $1 trillion, doubling the $500 billion target for 2026 announced last year. This long-term revenue commitment, which far exceeds Wall Street expectations, directly dispels concerns that "AI capital expenditures will peak in 2026," providing clear support for the industry's growth prospects.

Secondly, the LPX inference rack launched based on the acquired Groq technology marks a crucial step for Nvidia in the highly competitive inference market.

Goldman Sachs analyzed that the product's synergy with the Vera Rubin platform can increase throughput per watt by 35 times, creating more than 10 times the monetization space for trillion-parameter models. It accurately addresses the power bottleneck issues in data centers and is expected to start shipping in the third quarter of this year.

In addition, Nvidia's layout at the network and ecosystem levels also received recognition from Goldman Sachs. The mass production of Spectrum-X CPO switches, the vertical scaling CPO rack supporting 576 GPUs, and the NemoClaw platform for Agentic AI were all seen as key developments driving the implementation of enterprise-grade AI.

Goldman Sachs maintained a "Buy" rating and a $250 price target for Nvidia, believing that the capital expenditure plans of hyperscale cloud service providers will continue to consolidate its leading position.

Buoyed by the positive news from the GTC conference, Nvidia's shares rose more than 4.8% in intraday trading on Monday before closing up 1.63%.
