
Nvidia’s new benchmark data reveals that GB300 NVL72 systems equipped with Blackwell Ultra GPUs achieve up to 50x higher AI factory output and 35x lower cost per token compared to the Hopper platform for low-latency AI workloads. The metrics reflect combined hardware and software advancements targeting agentic AI and coding assistant deployments. Performance gains derive from specific architectural changes and library optimizations that address transformer attention layer bottlenecks. These efficiency improvements reduce operational costs for cloud providers and inference services, enabling broader deployment of compute-intensive models.
New data shows NVIDIA Blackwell Ultra delivers up to 50x better performance and 35x lower cost for agentic AI.
Cloud providers are deploying NVIDIA GB300 NVL72 systems at scale for low-latency and long-context use cases including agentic coding and coding assistants.
— NVIDIA (@nvidia) February 16, 2026
Blackwell Ultra Tensor Cores provide 1.5x greater compute performance than standard Blackwell GPUs. The architecture also doubles attention-layer throughput via accelerated softmax execution, which directly benefits reasoning models that work over large context windows. On the software side, Nvidia’s TensorRT-LLM inference library has delivered steady performance gains, with SemiAnalysis benchmarks documenting that throughput per GPU has doubled at certain interactivity levels since October 2025. The company states that these developments deliver a 10x increase in tokens per second per user and a 5x improvement in tokens per second per megawatt relative to Hopper; multiplied together, the two gains account for the claimed 50x rise in AI factory output.
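To make the attention bottleneck concrete, the NumPy sketch below shows where the softmax sits inside scaled dot-product attention. The shapes and code are illustrative only and say nothing about Nvidia’s actual kernels; the point is that the exponentiation and normalization run over every cached token for every query, so their cost grows with context length, which is why doubling softmax throughput matters for long-context reasoning models.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Illustrative scaled dot-product attention (NumPy sketch)."""
    d_k = q.shape[-1]
    # Attention scores: one row per query, one column per cached key.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)
    # Softmax over the key dimension -- the step Blackwell Ultra
    # accelerates. For a long context, this exponentiation and
    # normalization touches every cached token.
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy shapes: batch=1, 8 queries attending over a 4096-token context.
rng = np.random.default_rng(0)
q = rng.standard_normal((1, 8, 64))
k = rng.standard_normal((1, 4096, 64))
v = rng.standard_normal((1, 4096, 64))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (1, 8, 64)
```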
Chen Goldberg, senior vice president of engineering at CoreWeave, emphasized the operational focus of these advancements. “As inference moves to the center of AI production, long-context performance and token efficiency become critical,” Goldberg stated. “Grace Blackwell NVL72 addresses that challenge directly.” CoreWeave announced in 2025 that it was the first AI cloud provider to deploy GB300 NVL72 systems in production, integrating the hardware with its Kubernetes-based cloud stack.
Microsoft subsequently deployed what it describes as the world’s first large-scale GB300 NVL72 supercomputing cluster. In testing validated by Signal65, the cluster achieved over 1.1 million tokens per second on a single rack. Oracle’s OCI platform is also deploying GB300 NVL72 systems, with plans to scale its Superclusters beyond 100,000 Blackwell GPUs to meet inference workload demand.
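For scale, a back-of-envelope division puts the rack-level figure in per-GPU terms. The even split across the 72 GPUs in an NVL72 rack is an assumption for illustration, not a breakdown from the Signal65 report:

```python
# Back-of-envelope: per-GPU throughput implied by the Signal65 result,
# assuming an even split across the 72 GPUs in a GB300 NVL72 rack.
rack_tokens_per_second = 1_100_000  # "over 1.1 million tokens per second"
gpus_per_rack = 72                  # NVL72 = 72 Blackwell Ultra GPUs
per_gpu = rack_tokens_per_second / gpus_per_rack
print(f"~{per_gpu:,.0f} tokens/s per GPU")  # ~15,278 tokens/s per GPU
```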
Leading inference providers, including Baseten, DeepInfra, Fireworks AI, and Together AI, reported up to 10x cost reductions using the standard Blackwell platform. The Blackwell Ultra platform extends these efficiencies to workloads requiring low latency, achieving a 35x lower cost per million tokens.
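To translate the ratios into dollar terms, the sketch below applies them to a hypothetical baseline price. Only the 10x and 35x multipliers come from the reported figures; the $2.00-per-million-tokens Hopper baseline is an assumed placeholder, not a published price:

```python
# Illustration of the reported cost ratios; the baseline price is
# hypothetical -- only the 10x and 35x multipliers come from the article.
hopper_cost_per_m_tokens = 2.00                        # assumed baseline, USD
blackwell_cost = hopper_cost_per_m_tokens / 10         # up to 10x reduction
blackwell_ultra_cost = hopper_cost_per_m_tokens / 35   # low-latency workloads
print(f"Blackwell:       ${blackwell_cost:.3f} per million tokens")
print(f"Blackwell Ultra: ${blackwell_ultra_cost:.3f} per million tokens")
```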
This cost reduction makes it economically viable to deploy AI agents and coding assistants at scale. Looking ahead, Nvidia has previewed its next-generation Rubin platform, projecting a 10x performance improvement over Blackwell.