On-premises cloud experience accelerates data science

HPE GreenLake for ML Ops makes it easier and faster to get started with ML/AI projects and seamlessly scale them to production deployments. Within your data centre or colocation facility, deploy AI/ML workloads on HPE’s ML-optimised cloud service infrastructure featuring HPE Apollo hardware powered by HPE Ezmeral ML Ops – a solution designed to address all aspects of the ML lifecycle, from data preparation to model building, training, deployment, monitoring and collaboration. The HPE GreenLake edge-to-cloud platform offers consumption-based pricing, allowing you to consume these resources on premises with a cloud experience.

Solve for operational risk and data gravity issues

Avoid the compliance, security and data gravity issues of public cloud and the operational risk of running the infrastructure yourself. Workloads run right next to your on-premises data lake, letting you avoid hidden costs for data egress. Let HPE take on the burden of keeping your AI/ML platform up to date with the latest software versions and fixes across the entire stack.

Empower data scientists and accelerate time to value

With HPE GreenLake for ML Ops, your data scientists can focus on building models rather than managing and configuring infrastructure. This modern, extensible, Kubernetes-based data science framework empowers them to bring their own tooling and define workflows to build out data science algorithms for any data science use case.

Enjoy elastic pricing and cost monitoring

Reserve the capacity you need and pay per use for the resources you consume. With the ability to view your metered usage and associated costs, you can tie usage to specific business objectives.

Secure provisioning and management

Offload monitoring and management of your data science environment. With HPE GreenLake for ML Ops, your environment is securely managed from HPE IT Operations Centres and through HPE GreenLake Central.


HPE GreenLake for ML Ops

Run your ML workloads with the security and control that an on-premises infrastructure provides. Pick from two configurations – Standard and Performance Optimised – both built on an enterprise-grade, high-performing hardware/software stack optimised for machine learning. Featuring consumption-based billing, this service provides:

  • Simple, transparent pricing model that delivers on-premises service as an operational expense.
  • Elasticity to support unpredictable workloads.
  • Reserved capacity plus usage-based consumption, driving pricing predictability while supporting the variable demand typical of data science workloads.
  • Four-year contract, paid monthly.
  • Support for the extensible Kubeflow framework, providing access to a broad and growing range of tools developed by the open source community.

See how HPE GreenLake for ML Ops works with a free trial

  • The HPE GreenLake edge-to-cloud platform is delivering results for enterprises around the world, and we want you to experience it for yourself. When you request a free trial of HPE GreenLake for ML Ops, we’ll install the instance and set up your account on HPE GreenLake Central – providing management capabilities, usage insights and detailed consumption reporting.
  • A 21-day trial is standard and can be extended, if needed.
  • A trial engineer is assigned and available throughout the trial period to help you navigate through the use case scenarios and to answer your questions.
  • Service is delivered through a co-located HPE data centre, so no equipment is needed on your premises.
  • Your trial includes access to a standard HPE GreenLake for ML Ops configuration – where you can bring your own data and validate your use cases with the HPE GreenLake platform.
  • There’s no cost to you.

GET THE DETAILS

HPE GreenLake for ML Ops is available in two configurations: Standard and Performance Optimised. The recommended use and hardware specifications for each are listed below; the software stack, control plane, metering and included services are the same for both configurations.

Who is this recommended for

Standard Configuration: Companies with a data science team who want to use artificial intelligence and machine learning to solve business problems and need to run ML/AI workloads in an agile and secure manner on premises, without having to manage the infrastructure.

Performance Optimised Configuration: Companies with a data science team who are training deep learning models at scale, putting models into production or running multiple data science projects concurrently on premises.

Hardware specifications

Standard Configuration:
  • Compute: HPE Apollo 6500 (6 CPUs, 96 usable CPU cores) integrated with 4 accelerated NVIDIA Tesla V100 or A100 GPUs, plus HPE ProLiant DL360 integrated with 4 NVIDIA Tesla T4 GPUs.
  • Storage: HPE Apollo 4200 with 228 TB of usable storage.

Performance Optimised Configuration:
  • Compute: HPE Apollo 6500 (6 CPUs, 120 usable CPU cores) integrated with 8 accelerated NVIDIA Tesla V100 or A100 GPUs, plus HPE ProLiant DL360 integrated with 4 NVIDIA Tesla T4 GPUs.
  • Storage: HPE Apollo 4200 with 394 TB of usable storage and 150 TB of NVMe storage.

Software stack

  • HPE Ezmeral Runtime Enterprise and ML Ops software.
  • HPE GreenLake for ML Ops is based on open source Kubernetes and the Kubeflow data science framework. Supported tooling includes Grafana, Jupyter, PyTorch, Seldon, TensorFlow, Argo, R, Python, Pipelines and KFServing, along with required infrastructure components (see the workflow sketch after this list).
  • In addition to Kubeflow components, KubeDirector images provide additional tooling such as Jenkins, a Git client and Kafka.
  • Tooling such as the Applications Work Bench for HPE Ezmeral Runtime Enterprise and Helm is included so that both KubeDirector and Kubernetes applications and packages can be added to HPE GreenLake for ML Ops.
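
To illustrate the kind of workflow a data scientist might define on this stack, the following is a minimal sketch using the Kubeflow Pipelines v1 SDK (kfp). The container images, step names, data path and GPU request are hypothetical placeholders and are not part of the HPE GreenLake for ML Ops service definition.

# Minimal sketch of a Kubeflow Pipelines (kfp v1 SDK) workflow: preprocess, then train.
# All images, names and paths below are illustrative placeholders.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="example-training-pipeline",
    description="Illustrative preprocess-then-train workflow",
)
def training_pipeline(data_path: str = "/mnt/datalake/dataset"):
    # Step 1: run a hypothetical preprocessing container against on-premises data.
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="registry.example.local/preprocess:latest",  # placeholder image
        arguments=["--data-path", data_path],
    )

    # Step 2: train a model; request one GPU so the step is scheduled onto a GPU node.
    train = dsl.ContainerOp(
        name="train",
        image="registry.example.local/train:latest",  # placeholder image
        arguments=["--data-path", data_path],
    )
    train.set_gpu_limit(1)
    train.after(preprocess)


if __name__ == "__main__":
    # Compile into a package that can be uploaded through the Kubeflow Pipelines UI or API.
    kfp.compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")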

Control plane

Secure, self-service provisioning and management via a common control plane spanning HPE Ezmeral Runtime Enterprise and HPE GreenLake Central orchestration.

What is metered

Usage is metered based on the compute (per minute) and storage (per GB) used by the nodes in a cluster.

Four meters are used to calculate usage over the reserved capacity:

  • CPU cores – usage by minute
  • V100 or A100 GPU – usage by minute
  • T4 GPU – usage by minute
  • Storage – GB average usage per hour
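
To make the reserved-capacity-plus-overage model concrete, the sketch below shows how a monthly charge could be computed from these four meters. All reservation levels, unit rates and the fixed fee are invented for illustration; actual HPE GreenLake pricing and billing rules are defined in your contract.

# Hypothetical illustration of reserved capacity plus metered overage billing.
# Every number below is a placeholder; real rates come from the HPE GreenLake contract.

# meter: (reserved quantity per month, assumed rate per unit above the reservation)
METERS = {
    "cpu_core_minutes":      (2_000_000,  0.0004),
    "v100_a100_gpu_minutes": (40_000,     0.0300),
    "t4_gpu_minutes":        (60_000,     0.0100),
    "storage_gb_hours":      (50_000_000, 0.00005),
}

RESERVED_MONTHLY_FEE = 25_000.00  # assumed fixed charge for the reserved capacity


def monthly_charge(usage: dict) -> float:
    """Reserved fee plus per-meter charges for usage above each reservation."""
    overage = sum(
        max(0.0, usage.get(meter, 0.0) - reserved) * rate
        for meter, (reserved, rate) in METERS.items()
    )
    return RESERVED_MONTHLY_FEE + overage


# Example month: V100/A100 GPU minutes exceed the reservation; everything else stays within it.
usage = {
    "cpu_core_minutes": 1_800_000,
    "v100_a100_gpu_minutes": 55_000,
    "t4_gpu_minutes": 30_000,
    "storage_gb_hours": 48_000_000,
}
print(f"Monthly charge: {monthly_charge(usage):,.2f}")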

Included services

  • HPE engineers perform initial setup and integration with your data centre infrastructure. The service includes proactive and reactive support, with a single point of contact.
  • The service includes several days of post-install technical engagement with HPE experts, which you can use at your discretion.
  • Full monitoring and lifecycle management of the HPE GreenLake for ML Ops infrastructure by HPE.