HPE is upgrading its Private Cloud AI stack with Nvidia technology and preparing a France-based AI Factory Lab where customers will be able to test out workloads.

In advance of the firm’s Discover event in Barcelona this week, HPE is disclosing details of some of the AI-related products and services it will be showcasing for customers. As you might guess, many involve GPU giant Nvidia.

For example, the latest RTX PRO 6000 Blackwell Server Edition GPUs are to be available across all of HPE’s AI-focused private cloud platforms, along with STIG-hardened NIMs. STIG refers to the Security Technical Implementation Guides published by the Defense Information Systems Agency (DISA), and NIMs are Nvidia’s inference microservices for deploying AI models at scale.
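Nvidia documents NIMs as prebuilt containers that expose an OpenAI-compatible HTTP API once running, so application code talks to them much as it would to any hosted model endpoint. The Python sketch below is purely illustrative: the endpoint URL and model name are placeholders, not details of HPE’s deployment.

```python
# Minimal sketch of calling a deployed NIM over its OpenAI-compatible API.
# The URL and model name are hypothetical placeholders, not HPE specifics.
import requests

NIM_URL = "http://nim.internal.example:8000/v1/chat/completions"  # placeholder endpoint

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model; use whichever NIM is deployed
    "messages": [
        {"role": "user", "content": "Summarise last quarter's datacenter power usage."}
    ],
    "max_tokens": 256,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```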

HPE is also adding GPU fractionalization to those private cloud SKUs, a virtualization capability that carves Nvidia GPUs into smaller slices so several workloads can share a single card, with the aim of raising utilization and lowering costs.
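HPE has not said exactly how fractional GPUs will be surfaced to workloads. On Nvidia hardware this is commonly done with Multi-Instance GPU (MIG) partitioning or time-slicing, with each slice advertised to the scheduler as its own resource. Assuming a Kubernetes-based setup with Nvidia’s device plugin, requesting a slice rather than a whole GPU might look like the sketch below.

```python
# Illustrative only: requesting one MIG slice (not a whole GPU) for a pod via the
# Kubernetes Python client. Assumes a MIG-enabled Nvidia GPU and the Nvidia device
# plugin advertising per-slice resources such as nvidia.com/mig-1g.5gb; HPE has not
# detailed how its private cloud exposes fractional GPUs.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="inference-worker"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="registry.example/inference:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/mig-1g.5gb": "1"},  # one GPU slice, not a full card
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```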

Agentic AI is the latest buzzword in the industry, so HPE says it is bringing in Datacenter Ops Agents to simplify datacenter management and enable operations across agentic AI and hybrid cloud environments.

Following the acquisition of Juniper Networks, which closed in July, the firm is also moving to integrate Juniper technology with its AI and private cloud services. The first fruits of this are an Edge on-ramp, which uses the MX family of routers to link a private cloud with users and devices, and a Datacenter interconnect (DCI), which employs PTX routers to connect AI clusters running across long distances or across multiple clouds.

For storage, HPE announced Alletra Storage MP X10000 Data Intelligence Nodes with built-in capabilities to prepare data for AI processing.

“Most enterprises are discovering that their bottleneck to AI is not GPU capacity. It’s preparing the data for GPUs,” said chief technology officer Fidelma Russo.

She said the Alletra platform solves this by enriching and structuring data inline as it enters the system, automatically handling metadata tagging, vector embedding generation, and formatting.

“What this means to a customer is they don’t need a plethora of separate data prep tools before touching an LLM. And this is all built on our disaggregated Alletra Storage MP architecture, which means that you can scale capacity and performance independently. And what is the result? It’s a faster pipeline, it’s higher GPU utilization,” she claimed.
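For a sense of what that pipeline replaces, the rough Python sketch below shows the sort of chunk, tag, and embed work that otherwise gets stitched together from separate data prep tools before documents reach an LLM. The library and model choices are illustrative assumptions, not a description of what the X10000 runs internally.

```python
# Rough sketch of the chunk-tag-embed step that standalone data prep tools perform
# before documents reach an LLM pipeline. Library and model choices are illustrative,
# not what the Alletra X10000 uses internally.
from datetime import datetime, timezone
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

def prepare(doc_id: str, text: str, chunk_size: int = 512) -> list[dict]:
    """Split a document into chunks, tag each with metadata, and attach an embedding."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    embeddings = model.encode(chunks)
    return [
        {
            "doc_id": doc_id,
            "chunk_index": idx,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "text": chunk,
            "embedding": vec.tolist(),  # ready to land in a vector store
        }
        for idx, (chunk, vec) in enumerate(zip(chunks, embeddings))
    ]

records = prepare("q3-report", "HPE reported strong demand for AI systems in the quarter...")
```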

Meanwhile, HPE and Nvidia said that they intend to open an AI Factory Lab in Grenoble, France, for customers to try out and refine their workloads.

Due to open in Q2 2026, this facility is “dedicated to advancing AI factories, gigafactories and sovereign initiatives in the region,” Russo said. It will be equipped with the latest HPE and Nvidia AI factory technology, she added, “and we believe that this will be a large accelerator to helping our customers get their workloads ready for production into their environments.”

In the UK, Carbon3.ai says it is launching something similar. Its Private AI Lab, built on HPE’s Private Cloud AI platform, is intended to boost enterprise adoption of the technology by providing a workspace to take AI projects from pilot to production.

The firm says that it is building a sovereign AI infrastructure network for UK customers, claiming it is all powered by renewable energy.

“UK enterprises want secure, sovereign, and sustainable AI infrastructure they can trust,” said Carbon3.ai chief Tom Humphreys. “We’re accelerating enterprise adoption and helping the UK convert its AI potential into economic impact.” ®