
Accelerating time to insights by operationalizing machine learning in the enterprise

As enterprises move beyond experimentation with a few machine learning use cases to large-scale ML deployments, there's a need for a standardized approach to the ML workflow.

In most enterprises, data science teams spend a great deal of time building accurate models that address specific business problems. But these models deliver no business value until they are deployed into a software application that uses them to drive the desired business outcomes. This is where the process often breaks down, and enterprises need tools and processes in place to move models into production seamlessly.
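The hand-off described above can be illustrated with a minimal sketch: the data science team persists a trained model as an artifact, and a separate serving application loads that artifact to make predictions. The `ChurnModel` class, its weights, and the feature values here are all hypothetical stand-ins, not part of any real system described in this article.

```python
import pickle

# Hypothetical stand-in for a trained model: a simple linear scorer.
# In practice this artifact would come from the data science team's
# training pipeline (e.g., a serialized scikit-learn or TensorFlow model).
class ChurnModel:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def predict(self, features):
        # Score = dot(weights, features) + bias; predict churn if score > 0.
        score = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        return 1 if score > 0 else 0

# "Training" side: persist the model artifact for hand-off.
model = ChurnModel(weights=[0.8, -0.5], bias=0.1)
artifact = pickle.dumps(model)

# "Serving" side: a business application loads the artifact and uses it
# to deliver an outcome (here, a churn prediction for one customer).
served_model = pickle.loads(artifact)
prediction = served_model.predict([1.0, 0.2])
print(prediction)
```

The gap this article describes sits between those two halves: without standardized tooling, the serialized artifact, its dependencies, and its monitoring are handled ad hoc for every model.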

The situation with machine learning model development is similar to what we saw with the pre-DevOps software lifecycle. DevOps tools and processes solved many of these issues for software development, but applying those tools unchanged to ML will not work, because of the iterative and experimental nature of machine learning model development.


Though most cloud vendors claim to be the panacea for operationalizing machine learning models, the reality falls short. Cloud vendors offer disparate services that must be stitched together into an end-to-end solution. Moreover, not all workloads can move to the cloud: data has gravity, and it is most efficient to deploy machine learning models close to the data they consume. Enterprises need the flexibility to deploy workloads where their data resides—on premises, in the cloud, or at the edge.

Chris Gardner, Forrester

Victor Ghadban, HPE


This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.