AI Compute
HPE ProLiant Compute DL384 Gen12

Achieve next-level performance for mixed, memory-intensive, and AI workloads such as fine-tuning and inference with Retrieval-Augmented Generation (RAG).


Deploy at scale with rack-based solutions for any AI destination

As part of NVIDIA AI Computing by HPE, the HPE ProLiant Compute DL384 Gen12 with NVIDIA GH200 NVL2 delivers next-level performance for scale-out fine-tuning and inference with RAG.

Accelerate the shift to generative AI

Leverage artificial intelligence (AI), particularly large language models (LLMs), for AI fine-tuning and inference with RAG. Enable new generative AI (GenAI) applications such as text generation, language translation, coding, visual content creation, and more.

Maximize data center utilization

NVIDIA GH200 NVL2, with 1.2 terabytes of fast, unified, and coherent memory, supports mixed and memory-intensive workloads for next-level performance and maximizes data center utilization for AI computing tasks.

Get scale-out accelerated computing and enterprise AI productivity

Designed to deploy large language models for AI fine-tuning and inference with RAG, this versatile scale-out platform offers 3.5x the capacity and 2x higher performance, significantly enhancing computing capabilities. For faster enterprise AI deployment and success, leverage HPE Private Cloud AI.


More ways to explore

Unlock AI

Simplify AI complexity, accelerate productivity, and get pilots to production faster.

HPE Compute

Accelerate innovation from edge to cloud with workload-optimized compute for today’s data-first, hybrid world.

HPE Private Cloud AI

Accelerate your path from AI pilot to production with a turnkey AI private cloud.

Take the next steps

Ready to get started? Explore purchasing options or engage with HPE experts to determine the best solution for your business needs.