NVIDIA GTC Washington, D.C. Keynote Highlights: The Dawn of the AI Worker and Accelerated Everything

On October 28, 2025, NVIDIA CEO Jensen Huang delivered the GTC Washington, D.C. keynote, focusing on the future of technology across key strategic domains: AI, 6G, Quantum, Models, Enterprise Computing, Robotics, and Factories. His address positioned NVIDIA not just as a hardware leader but as the architect of a new industrial and economic era.

Part 1: The New Era of Accelerated Computing and AI

The keynote stressed that the world is currently navigating two simultaneous platform transitions: the fundamental shift from general-purpose computing to Accelerated Computing (GPU-based) and the explosive rise of Artificial Intelligence. CEO Jensen Huang positioned AI as the “new industrial revolution” and, critically, as the solution to the world’s severe labor shortage.

AI: From Tool to Worker

Huang drew a clear line between traditional software and AI, stating that AI fundamentally changes the nature of work:

  • AI as “Work”: Unlike traditional software tools (such as Excel, Word, or database applications), AI creates workers (agents) that possess the capacity to perform complex work, use tools, and solve problems independently.
  • Economic Engagement: This transformation is expected to engage the entire $100 trillion global economy, a massive segment never before addressed by the traditional IT industry.

The AI Virtuous Cycle

The industry has entered a virtuous cycle driven by what Huang termed the three scaling laws: pre-training, post-training, and test-time thinking (inference).

  • As models become smarter, more people utilize them.
  • This increased usage and paid-for service requires significantly more compute power, which in turn fuels further AI development.
  • The result is two exponentials created simultaneously by the three scaling laws: exponential growth in compute demand, and exponential growth in adoption and usage.

The AI Factory Architecture: Extreme Co-Design

With the pace of Moore’s Law slowing, sustained exponential performance gains can only be achieved through Extreme Co-Design: the simultaneous invention of chips, systems, software, and model architectures.

The AI Factory Concept

Huang introduced the “Concept of the AI Factory”: a new type of data center dedicated solely to producing tokens (the computational unit/vocabulary of AI). The overarching goal of this factory architecture is two-fold: to maximize performance and to deliver the absolute lowest cost-per-token globally.

The strategy for achieving this involves re-architecting the entire stack through Co-Design, Scale-Up, Scale-Out, and Scale-Across.
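As a rough illustration of the cost-per-token objective, consider a hypothetical back-of-envelope model (all numbers below are invented for illustration, not NVIDIA figures): amortize capital cost and power over the tokens a rack produces, so a throughput gain at similar capex and power cuts cost-per-token proportionally.

```python
# Hypothetical cost-per-token model: cost = (amortized capex + power) / tokens.
def cost_per_million_tokens(capex_usd, lifetime_years, power_kw,
                            usd_per_kwh, tokens_per_second):
    """Rough $/1M tokens for a rack running 24/7 over its lifetime."""
    seconds = lifetime_years * 365 * 24 * 3600
    capex_per_second = capex_usd / seconds
    power_per_second = power_kw * usd_per_kwh / 3600
    usd_per_token = (capex_per_second + power_per_second) / tokens_per_second
    return usd_per_token * 1_000_000

# A 10x throughput gain at similar capex and power cuts cost-per-token ~10x.
base = cost_per_million_tokens(3_000_000, 5, 120, 0.08, 50_000)
faster = cost_per_million_tokens(3_000_000, 5, 120, 0.08, 500_000)
print(round(base / faster, 1))  # → 10.0
```

This is why a generational speedup matters more than a generational price change: throughput sits in the denominator of every term.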

Next-Generation Hardware Performance

  • Grace Blackwell (GB200) Performance: The new Grace Blackwell architecture, combined into the Blackwell NVLink 72 rack-scale computer, delivers a 10x generational performance increase over the previous Hopper generation for inference workloads. This speed gain translates directly into the industry’s lowest cost-per-token.
  • The Rubin Platform: Looking ahead, Huang unveiled the next-generation platform, Rubin. This system will feature completely cableless, 100% liquid-cooled rack-scale systems. Its architecture includes the new BlueField-4 context processor, designed specifically to accelerate the massive memory access required for large-scale key-value (KV) caching in large language models (LLMs).
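KV caching, which the context processor is said to accelerate, stores each layer's attention keys and values so that every new token attends over the cached history instead of recomputing it. A minimal single-head sketch in NumPy (shapes and sizes are hypothetical, and real serving stacks shard this cache across accelerator memory, which is exactly the access pattern described above):

```python
import numpy as np

# Illustrative per-layer KV cache for autoregressive decoding.
# All dimensions are invented for this sketch, not drawn from the keynote.
class KVCache:
    def __init__(self, n_layers, max_len, d_head):
        self.k = np.zeros((n_layers, max_len, d_head))
        self.v = np.zeros((n_layers, max_len, d_head))
        self.len = 0  # number of fully written positions

    def append(self, layer, k_new, v_new):
        # Write this step's key/value at the current position.
        self.k[layer, self.len] = k_new
        self.v[layer, self.len] = v_new

    def attend(self, layer, q):
        # Attention over all cached positions plus the current one,
        # so past keys/values are reused rather than recomputed.
        k = self.k[layer, : self.len + 1]
        v = self.v[layer, : self.len + 1]
        scores = k @ q / np.sqrt(q.shape[-1])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ v

cache = KVCache(n_layers=2, max_len=8, d_head=4)
for step in range(3):              # three decode steps
    for layer in range(2):
        k = v = q = np.ones(4) * (step + 1)
        cache.append(layer, k, v)
        out = cache.attend(layer, q)
    cache.len += 1                 # advance after all layers wrote this position
print(out.shape)  # → (4,)
```

The cache grows linearly with context length, which is why long-context inference becomes memory-bound and why offloading that memory traffic is worth dedicated silicon.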

Financial Strength and “Made in America”

The massive technical roadmap is backed by staggering financial visibility:

  • Growth Outlook: NVIDIA has visibility into $500 billion of cumulative Blackwell and early Rubin business through 2026 (excluding China and Asia), representing a five-fold growth rate compared to the entire life of the Hopper platform.
  • Re-industrialization: The keynote affirmed a strong commitment to the “Made in America” strategy, celebrating the return of manufacturing. The Blackwell GB200 Ultra Superchip and associated systems are entering full production across facilities in Arizona, California, and Texas.

Part 2: Accelerating Industries and Ecosystem

NVIDIA announced major platforms and partnerships designed to extend its accelerated computing model far beyond the traditional data center.

6G and Telecommunications

NVIDIA unveiled NVIDIA ARC (Aerial Radio Network Computer), a new platform built on Grace CPU, Blackwell GPU, and ConnectX networking, designed to run the CUDA-X library called Aerial.

  • Nokia Partnership: Nokia will integrate NVIDIA ARC into their future base stations and offer upgrades to millions of existing base stations globally. This partnership enables two key breakthroughs:
    • AI for RAN: Utilizing AI to improve spectral efficiency for massive power savings.
    • AI on RAN: Transforming base stations into distributed edge computing platforms for industrial robotics, effectively building an AWS-style cloud on the radio/wireless network.

Physical AI and Robotics

Physical AI, according to Huang, requires the synergistic work of three computers: the GB200 (Training), the Omniverse Computer (Digital Twin/Simulation), and the Jetson Thor (Robotic Computer/Inference).

  • Omniverse DSX: The Digital Twin Factory Experience (DSX) acts as a blueprint for designing, planning, and operating multi-gigawatt AI factories. This digital twin approach, backed by partners like Foxconn, Siemens, and Bechtel, significantly shrinks build time and optimizes operations, leading to billions of dollars in added annual revenue.
  • Humanoid Robotics: Collaborations were announced with leading robotics companies such as Figure, Agility, and Johnson & Johnson (for surgical robots).
  • Disney Partnership: A highlight included the simulation of the expressive, physically aware robot Blue using the new Newton physics engine, showcased through a partnership with Disney Research.

Autonomous Systems

  • NVIDIA Drive Hyperion: This standard reference platform (comprising the sensor suite and compute) is designed to be robo-taxi ready. It is currently being integrated by major automakers, including Lucid, Mercedes-Benz, and Stellantis.
  • Uber Partnership: A new partnership with Uber was announced to connect these Hyperion-equipped vehicles into a global network, rapidly accelerating the deployment and adoption of robo-taxis.

Scientific Computing and Quantum

  • Quantum Computing (NVQLink): A new interconnect that fuses QPUs (Quantum Processing Units) with GPU supercomputers. NVQLink enables the high-speed data movement required for quantum error correction, control, and calibration, which is essential for scaling quantum computers with CUDA-Q.
  • Department of Energy (DOE) Initiative: The DOE is partnering with NVIDIA to build seven new AI supercomputers to ensure U.S. leadership in science, leveraging accelerated, AI-augmented, and quantum-enhanced computing.
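To make the error-correction loop concrete, here is a hedged sketch of its classical half for a 3-qubit bit-flip repetition code: measure parity syndromes, decode which qubit flipped, and apply the correction. This measure-decode-correct round trip is the latency-critical path an interconnect like NVQLink targets; no real CUDA-Q or QPU APIs appear here, and the code is purely illustrative.

```python
# Classical decoder for the 3-qubit bit-flip repetition code (illustrative;
# real decoders run against hardware syndromes within the coherence time).
def decode_bit_flip(s1, s2):
    """Map two parity syndromes (q0^q1, q1^q2) to the qubit to flip, or None."""
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]

def correct(qubits):
    # Measure the two stabilizer parities, decode, and apply the fix.
    s1, s2 = qubits[0] ^ qubits[1], qubits[1] ^ qubits[2]
    flip = decode_bit_flip(s1, s2)
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

print(correct([0, 1, 0]))  # single flip on q1 repaired → [0, 0, 0]
```

In a real system this loop must complete in microseconds, which is why moving syndrome data between QPU control electronics and GPU decoders needs a dedicated low-latency link.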

Enterprise and Ecosystem

NVIDIA detailed key partnerships across the enterprise landscape:

  • Cybersecurity Defense: A partnership with CrowdStrike will integrate NVIDIA’s AI capabilities into their Falcon platform to create intelligent, fast-acting AI defense agents.
  • Data and National Security: Collaboration with Palantir will accelerate the Palantir Ontology platform, enabling governments and enterprises to process structured and unstructured data at unprecedented speed and scale.
  • Open-Source Commitment: NVIDIA committed to leading in open-source AI, currently contributing 23 models to leaderboards across language, physical AI, and biology, underscoring its importance for startups and researchers.
  • Global Integration: The NVIDIA full stack, including CUDA-X and open-source models, is now fully integrated across every major hyperscaler (AWS, Google Cloud, Microsoft Azure, Oracle) and industry SaaS leader (SAP, ServiceNow, Synopsys, Cadence).

Watch: NVIDIA GTC Washington, D.C. Keynote with CEO Jensen Huang
