Much like pre-DevOps software development, data science organizations still spend significant time and effort moving projects from development to production. Model version control and code sharing are manual, and the lack of standardization on tools and frameworks makes it tedious and time-consuming to productionize machine learning models.

HPE Machine Learning Ops (HPE ML Ops) extends the capabilities of the BlueData EPIC platform and brings DevOps-like agility to enterprise machine learning. With the HPE ML Ops platform, enterprises can implement DevOps processes to standardize their ML workflows.

HPE ML Ops provides data science teams with a platform for their end-to-end data science needs, with the flexibility to run machine learning (ML) or deep learning (DL) workloads on-premises, in multiple public clouds, or in a hybrid model, and to respond to dynamic business requirements across a variety of use cases.

What's New

  • Leverage the power of containers to create complex machine learning and deep learning stacks, including TensorFlow, Apache Spark on YARN with Kerberos, H2O, and Python ML and DL toolkits.
  • Spin up distributed, scalable ML and DL training environments in minutes rather than months, whether on-premises, in the public cloud, or in a hybrid model.
  • Use your choice of tools to support even the most complex ML workflows. For example, start with data preparation in Spark, follow with training in TensorFlow on GPUs, and deploy on CPUs with the TensorFlow runtime.
  • Implement CI/CD processes for your ML projects with a model registry. The registry stores models and versions created within HPE ML Ops as well as those created with other tools and platforms.
  • Improve the reliability and reproducibility of ML projects with a shared project repository (for example, GitHub).
  • Deploy models in production with reliable, scalable, and highly available endpoints, with out-of-the-box autoscaling and load balancing.
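Conceptually, a model registry such as the one described above is a versioned artifact store: each registration of a model produces a new, immutable version that CI/CD pipelines can later look up and deploy. The following stdlib-only Python sketch illustrates that idea in broad strokes; it is not HPE ML Ops' actual API, and every class and method name here is hypothetical.

```python
import json
import time
from pathlib import Path


class ModelRegistry:
    """Illustrative file-backed model registry (hypothetical, not HPE ML Ops' API).

    Each model gets a directory; each registration creates the next
    numbered version holding the artifact bytes plus metadata.
    """

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def register(self, name, artifact, metadata=None):
        """Store a new version of a model's artifact; return the version number."""
        model_dir = self.root / name
        model_dir.mkdir(exist_ok=True)
        existing = [int(p.name) for p in model_dir.iterdir()]
        version = max(existing, default=0) + 1
        version_dir = model_dir / str(version)
        version_dir.mkdir()
        (version_dir / "model.bin").write_bytes(artifact)
        (version_dir / "meta.json").write_text(
            json.dumps({"registered_at": time.time(), **(metadata or {})})
        )
        return version

    def latest(self, name):
        """Return the highest registered version number for a model."""
        return max(int(p.name) for p in (self.root / name).iterdir())

    def load(self, name, version):
        """Return the stored artifact bytes for a specific version."""
        return (self.root / name / str(version) / "model.bin").read_bytes()
```

Because versions are immutable and numbered, a deployment pipeline can pin an exact version for production while newer candidates are still being evaluated.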


Faster Time to Value

Manage and provision infrastructure through an intuitive graphical user interface.

Provision development, test, or production environments in minutes as opposed to days.

Onboard new data scientists rapidly with their choice of tools and languages without creating siloed development environments.

Improved Productivity

Data scientists spend their time building models and analyzing results rather than waiting for training jobs to complete.

BlueData, recently acquired by Hewlett Packard Enterprise, helps ensure no loss of accuracy or performance degradation in multi-tenant environments.

Increase collaboration and reproducibility with shared code, project, and model repositories.

Reduced Risk

Enterprise-grade security and access controls on compute and data.

Lineage tracking provides model governance and auditability for regulatory compliance.
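At its core, lineage tracking records an auditable link between each model version and the exact data, code, and parameters that produced it. The stdlib sketch below shows the shape of such a record; the function names and fields are illustrative assumptions, not HPE ML Ops' implementation.

```python
import hashlib
import json
import time


def lineage_record(model_name, version, dataset_bytes, git_commit, params, metrics):
    """Build an audit-ready lineage entry for one trained model version.

    Hashing the training data ties the record to the exact dataset;
    the commit hash ties it to the exact code. (Illustrative sketch.)
    """
    return {
        "model": model_name,
        "version": version,
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "git_commit": git_commit,
        "params": params,
        "metrics": metrics,
        "recorded_at": time.time(),
    }


def append_to_log(path, record):
    """Append a lineage record to a JSON-lines audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

An append-only log of such records lets an auditor answer, for any deployed version, which data and code produced it and what its evaluation metrics were.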

Integrations with third-party software provide model interpretability.

High-availability deployments help ensure critical applications do not fail.

Flexible and Elastic

Deploy on-premises, cloud, or in a hybrid model to suit your business requirements.

Clusters autoscale to meet the requirements of dynamic workloads.