HPE AI Inference solution featuring Qualcomm® Cloud AI 100

Delivering high-performance, energy-efficient infrastructure for demanding AI/ML inference workloads.

High-performance, energy-efficient AI Inference from edge to cloud

An AI Inference solution built on the integration of Qualcomm Cloud AI 100 accelerators with HPE Edgeline EL8000 and HPE ProLiant DL385 Gen10 Plus v2 servers. Dramatically improve server performance on AI/ML inference workloads for faster insight and higher-accuracy data transformation with computer vision and natural language models.

Performance leadership

The combination of Qualcomm Cloud AI 100 accelerators with HPE Edgeline and HPE ProLiant delivers high performance and reduces the latency associated with complex artificial intelligence and machine learning models at the edge.

  • Qualcomm Cloud AI 100 SoC: high-performance architecture optimized for deep learning inference in cloud and edge deployments
  • Fastest, densest AI Inference solution in the world (MLPerf™ 2.0)
  • Peak integer ops (INT8): up to 350 TOPS
  • Peak floating-point ops (FP16): up to 175 TFLOPS
What is AI Inference?

AI inferencing at the edge refers to deploying trained AI models outside the data center and cloud—at the point where data is created and can be acted upon quickly to generate business value.
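
As a minimal illustration of what inference means in practice, the Python sketch below runs a trained model with ONNX Runtime on a generic host. The model file name, input tensor name, and choice of ONNX Runtime are illustrative assumptions, not part of the HPE/Qualcomm stack itself.

```python
# A minimal sketch of AI inference: a trained model generating a
# prediction from new data. "resnet50.onnx" and the input name "input"
# are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Load a trained model previously exported from a framework such as PyTorch.
session = ort.InferenceSession("resnet50.onnx")

# A single 224x224 RGB image batch, standing in for data created at the edge.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; this assumes the model has one output (class logits).
outputs = session.run(None, {"input": image})
print("Predicted class:", int(outputs[0].argmax()))
```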

Power efficiency

The Qualcomm Cloud AI 100 draws on industry-leading low-power technology refined over a decade of mobile development and delivers leading performance per watt.

  • 75 W TDP power envelope, ideal for edge deployments
  • 6th-generation Qualcomm AI core
  • Tensor unit design that is 5X more power-efficient than a vector unit
  • 8 MB of memory per core maximizes data reuse and lowers power
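
As a rough, illustrative calculation (not a vendor benchmark), the peak figures quoted on this page imply the following efficiency ceiling, assuming the accelerator could sustain peak INT8 throughput at its full 75 W envelope:

```python
# Back-of-envelope arithmetic from the peak figures quoted above.
# Real workloads rarely sustain peak throughput at full TDP, so this is
# an upper bound, not a measured result.
peak_int8_tops = 350   # peak INT8 throughput, TOPS
tdp_watts = 75         # thermal design power envelope, W

print(f"Peak efficiency ceiling: {peak_int8_tops / tdp_watts:.1f} TOPS/W")
# -> Peak efficiency ceiling: 4.7 TOPS/W
```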
Learn more about HPE Edgeline AI Inference Solution featuring Qualcomm Cloud AI 100
Edge-to-cloud scalability

HPE Edgeline can support up to eight Qualcomm Cloud AI 100 accelerators in a single EL8000 chassis at the edge.

  • Focused on AI/ML inference across cloud and edge applications
  • Architected to scale across technology generations
Learn more about HPE Edgeline AI Inference Solution featuring Qualcomm Cloud AI 100
Extensive software toolchain

Qualcomm Cloud AI 100 comes with a comprehensive set of tools for optimizing and managing trained AI/ML models on AI Inference infrastructure.

  • Cloud AI 100 supports 200+ deep learning networks across computer vision, natural language processing, and recommendation systems.
  • Qualcomm software supports compilation, optimization, and deployment starting from industry-standard ML frameworks such as TensorFlow, PyTorch, and ONNX (see the sketch below).
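
As a hedged illustration of that framework-side starting point, the sketch below exports a trained PyTorch model to the industry-standard ONNX format; the Qualcomm toolchain would then compile and deploy the resulting model onto Cloud AI 100 hardware. Those vendor-specific steps are not shown here, and the choice of model is an assumption for the example.

```python
# A minimal sketch of the framework-side step: exporting a trained PyTorch
# model to ONNX. Compiling the resulting .onnx file for Cloud AI 100 is
# handled by Qualcomm's toolchain and is not shown here.
import torch
import torchvision

# Any trained model works; a pretrained ResNet-50 is used as a stand-in.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()

# Example input that fixes the tensor shapes recorded in the ONNX graph.
example = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    example,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["logits"],
)
```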
Learn more about HPE Edgeline AI Inference Solution featuring Qualcomm Cloud AI 100
An HPE and Qualcomm collaboration focused on AI Inference Computing

In April 2022, HPE announced a partnership with Qualcomm to provide customers with high-performing, power-efficient hardware acceleration for AI Inference workloads.

The Cloud AI 100 represents the first product collaboration between HPE and Qualcomm, drawing on over a decade of research and development to deliver deep learning inference acceleration that is high-performing, power-efficient, secure, and manageable.
The partnership is significant because it expands HPE’s AI solution portfolio into inference. The Cloud AI 100 is purpose-built for AI inference computing, a key purpose of AI: generating predictions from a trained model.

Learn more about HPE Edgeline AI Inference Solution featuring Qualcomm Cloud AI 100

HPE and Qualcomm deliver best-in-class performance and latency for AI Inference workloads from the edge to the data center

MLPerf Inference v3.0 benchmarks showcase the performance of HPE systems integrated with Qualcomm Cloud AI 100 accelerators


Qualcomm is a trademark or registered trademark of Qualcomm Incorporated. Qualcomm Cloud AI is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.