
Operationalizing machine learning: The future of practical AI

The possibilities for artificial intelligence in business are almost endless, but what must be done to truly integrate one with the other? The practice of MLOps may be the differentiator for success.

The key to delivering consistent business value with AI is to employ operational machine learning workflows that fully integrate machine learning models into standard enterprise processes in a reliable and repeatable fashion.

That's where MLOps comes in.

"There are fundamentally two things enterprises can do with machine learning: One is to make processes more efficient, and the other is to launch new products and features," says Piero Cinquegrana, data scientist and co-author of O'Reilly's "Machine Learning at Enterprise Scale."

These processes could include the sales process, marketing measurement, operations, and other tasks that are repeatable and automatable: all kinds of what Cinquegrana calls domains.

"Some classic use cases are measurement, such as scoring leads for sales so that sales account executives don't have to cold call a long list of unqualified leads," he says. "Any time you can measure things more effectively, you're spending dollars more effectively on channels or audiences that are more responsive to your messages. These are situations in which machine learning can intervene and make things more scalable and efficient."
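To make the lead-scoring use case concrete, here is a toy sketch of how a scored lead list might be filtered so account executives skip unqualified leads. The features, weights, and threshold are illustrative assumptions, not a real model; in practice the weights would be learned from historical win/loss data:

```python
import math

# Hypothetical feature weights; a real model would learn these
# from historical conversion data rather than hard-coding them.
WEIGHTS = {"visits": 0.4, "emails_opened": 0.3, "is_target_industry": 1.2}
BIAS = -3.0

def score_lead(lead: dict) -> float:
    """Return a 0-1 probability-like score for a sales lead."""
    z = BIAS + sum(WEIGHTS[k] * lead.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function

def qualified(leads, threshold=0.5):
    """Keep only leads whose score clears the threshold."""
    return [lead for lead in leads if score_lead(lead) >= threshold]

leads = [
    {"visits": 1, "emails_opened": 0, "is_target_industry": 0},
    {"visits": 8, "emails_opened": 5, "is_target_industry": 1},
]
print(qualified(leads))  # only the second, engaged lead survives
```

The point of the sketch is the workflow, not the math: scoring turns a long cold-call list into a short, prioritized one.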


MLOps is a framework

MLOps refers to the processes of creating a machine learning lifecycle and a more standardized process for machine learning workflows, according to Matheen Raza, product strategist for HPE's enterprise software business unit and Cinquegrana's co-author on the O'Reilly report. "It's actually a play upon DevOps, and it aims to bring DevOps-like processes to machine learning."

Like DevOps, MLOps uses CI/CD (continuous integration/continuous delivery) to roll out updates and improvements without halting technological or business processes.

"MLOps is a framework to allow data scientists to quickly spin up their development environments, develop their models, train their models against enterprise production data, and then using the underlying container platform, push those models and the production code into a runtime," explains Matt Maccaux, global field CTO at HPE.
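The develop-train-push cycle Maccaux describes can be sketched as a minimal quality gate: train a model, validate it against held-out data, and promote it to a "runtime" only if it clears a quality bar. Everything below is an illustrative assumption (a stand-in model, an in-memory registry in place of a container platform), not a particular vendor's API:

```python
import statistics

# Stand-in for a model runtime/registry; a real MLOps platform would
# push a containerized model into a serving environment instead.
REGISTRY = {}

def train(data):
    """Toy "model": predict the mean of the training labels."""
    return {"prediction": statistics.mean(y for _, y in data)}

def validate(model, holdout):
    """Mean absolute error of the model on held-out data."""
    return statistics.mean(abs(model["prediction"] - y) for _, y in holdout)

def deploy_if_good(model, error, max_error):
    """Promote the model only if it clears the quality bar."""
    if error > max_error:
        return False  # gate fails; the previous model keeps serving
    REGISTRY["current"] = model
    return True

train_data = [(0, 10.0), (1, 12.0), (2, 11.0)]
holdout = [(3, 11.5), (4, 10.5)]
model = train(train_data)
err = validate(model, holdout)
print(deploy_if_good(model, err, max_error=2.0))  # prints True
```

The gate is the essential MLOps idea: promotion to production is an automated, repeatable step with an explicit quality check, not a manual handoff.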

In simpler terms, MLOps is the application of processes to deliver consistent and reliable AI models in an enterprise. It is practical machine learning, and it has become extremely popular across all industries. 

The rise of the machines

The rise in the use of machine learning, and now MLOps, over the past several years tracks the same driver behind the broader growth of AI: data.

More computing is happening at the edge. The Internet has become less of an entertainment and more of a utility, reaching 4.9 billion people, according to a study by Domo. Apps are increasingly popular. This has resulted in data—something that used to be considered more of an exhaust product than a resource—growing geometrically. Jim Short, lead scientist at the San Diego Supercomputer Center, estimates a data growth rate of 40 percent per year.

Now that most enterprise professionals consider data a resource—one that can make the difference between success and failure—they are willing to experiment with processes like MLOps in the hopes of leveraging that data for business gain.

You can use machine learning on small datasets, says Cinquegrana. "But really, you start seeing a lot of improvements when you have lots of data and you can put things in production, and so this goal has become much more attainable by enterprises."

MLOps lifecycle

An interesting element of MLOps is that its lifecycle is split between two very different user groups with two different jobs: data scientists, who create machine learning models and derive insights from running them on big data, and software engineers, who are concerned with implementing and operating the technology for the company as a whole and for whom it is primarily an engineering function.

"There are discrepancies or differences between the production environment and the development environment," notes Raza. For IT, it's a problem of scaling.

"The more users you want to support, the more you've got to grow your admin staff and your IT team," says Raza. "But IT budgets are pretty much constant, so they are basically tasked with doing more with less." Data scientists, on the other hand, simply want their projects and models deployed into production.

These obstacles interfere with MLOps realizing its maximum potential.

Obstacle course

According to Maccaux, 80 to 85 percent of companies employing MLOps run into the last-mile problem: They can't put the models into practice. Some 60 percent of all ML models across the enterprise have been built but never operationalized, due to a lack of implementation tools.

So, to make MLOps more practical and more utilitarian across enterprises, several improvements need to be made. First, according to Cinquegrana, companies need good data infrastructure and, on top of that, good ML infrastructure.

As noted, the tools that allow for the creation of these infrastructures are complex and numerous. But according to Maccaux, a company's IT department cannot be the gatekeeper of these tools without also becoming the bottleneck for those who need to have free access to them. So MLOps will need a ready toolkit with maximum utility, ease of use, and security.

On the other hand, data scientists can also be bottlenecks, notes Cinquegrana. A practical and effective MLOps solution will give data scientists flexibility of choice when it comes to ML tools and frameworks while also ensuring that certain processes are followed so collaboration remains possible.

At its most efficient and effective, says Raza, "MLOps must enable enterprises to standardize the machine learning lifecycle while providing users with the flexibility to deploy their machine learning applications across their choice of infrastructure—either on premises, in multiple clouds, or at the edge—while also maintaining enterprise-grade security and governance."

MLOps future

"Machine learning and MLOps are both resident in an emerging space, and everything is changing rapidly," says Maccaux. "It's a cool space to be in. In six months, how much will things have changed, especially with the current global landscape?"

The perfection of MLOps, he adds, is something "we get to figure out together."



This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.