Hewlett Packard Enterprise and NVIDIA have unveiled an expanded portfolio of full‑stack AI and supercomputing systems, integrating advanced hardware and software to support research institutions, cloud providers, and sovereign deployments at extreme scale.
Hewlett Packard Enterprise has expanded its co‑engineering with NVIDIA to deliver a broadened line of full‑stack AI and supercomputing systems aimed at organisations running at the frontier of model scale and high‑performance computing. The refresh packages NVIDIA Vera and Rubin processor technologies with HPE’s Cray supercomputing architecture, liquid cooling and services, positioning the vendor to support large research centres, cloud providers and sovereign deployments seeking rapid time‑to‑insight. According to HPE and partner briefings, the move is intended to simplify rollouts for customers working on models and HPC workloads at extreme scale. (Sources: HPE and NVIDIA product blogs and collaboration materials)
At the hardware level, HPE’s Cray Supercomputing GX5000 family is being updated to include a new liquid‑cooled NVIDIA compute blade powered by up to 16 NVIDIA Vera CPUs per GX240 unit and configured for high‑density rack deployments. HPE describes the GX5000 platform as a second‑generation exascale‑class system designed to converge AI and HPC, offering multi‑tenant management and higher density than its predecessor to accelerate scientific discovery and engineering workloads. (Sources: HPE product pages, NVIDIA rack specifications)
Network options for these systems have been expanded to support large fabric deployments, with Quantum‑X800 InfiniBand switches and sixth‑generation NVLink scale‑up connectivity cited for use in rack‑scale and scale‑out configurations. The networking choices are presented as part of an integrated approach to power efficiency and throughput for clusters running mixed AI and traditional HPC jobs. HPE also highlights its liquid‑cooling integration and data centre design services as enablers for deploying these denser, higher‑power systems. (Sources: HPE product pages, NVIDIA technical overviews)
For neo‑clouds and service providers, HPE is offering a rack‑scale Vera Rubin NVL72 system engineered for frontier models beyond one trillion parameters, pairing up to 36 Vera CPUs with 72 Rubin GPUs plus ConnectX‑9 SuperNICs and BlueField DPUs in a cable‑free MGX‑style tray design. Complementing that, the new HPE Compute XD700 OCP‑inspired server built on NVIDIA’s HGX Rubin NVL8 is pitched for maximum GPU density, supporting up to 128 Rubin GPUs per rack to raise training and inference throughput while reducing space, power and cooling costs. HPE says these systems are intended to halve previous‑generation rack footprints for comparable GPU counts. (Sources: NVIDIA Vera Rubin NVL72 specification pages, HPE and NVIDIA collaboration announcements)
Software, tenancy and operational tooling are being emphasised alongside hardware. HPE’s AI Factory portfolio is promoted as certified for the NVIDIA Cloud Provider workflow and now supports multi‑tenancy via NVIDIA Multi‑Instance GPU (MIG) integrated with SUSE Virtualization and SUSE Rancher Prime Suite, while Red Hat Enterprise Linux and OpenShift remain supported for enterprise AI stacks. HPE also plans to offer NVIDIA Mission Control capabilities for workload orchestration, monitoring and autonomous recovery, positioning a managed operating layer for platform teams running at scale. (Sources: HPE solution pages, NVIDIA and HPE AI factory blogs)
HPE framed the announcement as a continuation of its push to bring converged AI and HPC to institutions and national programmes, saying the new systems will be used by research labs and service providers worldwide. “Having built the three most powerful exascale supercomputers in the world, HPE is at the forefront of innovation that brings together cutting‑edge AI workloads with traditional HPC to accelerate scientific breakthroughs,” said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE, in the announcement. Chris Marriott, vice president, Enterprise Platforms at NVIDIA, added that “Together, HPE and NVIDIA have developed full‑stack AI infrastructure that unites accelerated computing, advanced networking and liquid cooling for faster time‑to‑insight in at‑scale and sovereign environments.” HPE points to recent wins and DOE partnerships as evidence of demand for the combined stack in national‑scale research and secure compute programmes. (Sources: HPE press materials, HPE newsroom release on DOE collaborations)
Source: Noah Wire Services


