HPE to join the Open Neural Network Exchange

May 31, 2018 • Mark Potter, HPE CTO and Director of Hewlett Packard Labs • Blog Post

IN THIS ARTICLE

  • HPE is joining the Open Neural Network Exchange (ONNX) to work alongside industry leaders in pushing open AI standards
  • Utilizing an open AI platform is smart for businesses as it makes fewer demands on customers
  • Open standards are necessary for AI because they give developers the freedom to choose the best software, engine and deployment strategy

We look forward to extending both our open source and our AI commitments by working as part of ONNX

I'm pleased to announce that Hewlett Packard Enterprise is joining the Open Neural Network Exchange (ONNX), a group that promotes the creation and adoption of an open standard to represent deep learning models.
 

We'll be joining Microsoft, Facebook, and Amazon, the founders of ONNX, and ONNX partners like AMD, NVIDIA, IBM, and other industry leaders to push open artificial intelligence (AI) standards forward in the coming years. ONNX currently provides a definition of an extensible computation graph model, as well as definitions of built-in operators and standard data types, and is supported by a growing set of frameworks, converters, runtimes, compilers, and visualizers.
 

We made this decision so that we can contribute our own expertise in machine learning software and hardware to an industry-standard neural network exchange format. In turn, we'll benefit from the experience of ONNX's industry partners and transfer that experience to our customers through our products and services.
 

Why open AI standards are smart for business

By supporting ONNX as a partner, HPE will continue to deliver high-quality, open products and services to our customers that are compliant with ONNX standards. At the same time, we'll be able to effectively communicate our customers' needs and influence the organization to consider them as well.
 

ONNX is a logical next step in our long history of open source collaboration - including our recent membership in the Gen-Z Consortium for fabric interconnect standards - and harmonizes with our focus on AI solutions. It serves the interests of our customers and our responsibility to our communities, and it builds on our long-standing investigation into neuromorphic computing through projects like the Cognitive Computing Toolkit.
 

Why open standards are necessary for AI
 

Creating an AI model involves two stages. The first is training: teaching the model, for instance, to recognize one type of image and distinguish it from others. But once the model is trained, you need to be able to use it.
 

That second stage - in which you employ the trained model - is called inference. Interoperability lets a developer train with whichever software stack is best for the job, on a more powerful engine like a GPU for example, and then deploy the model on any other stack, running it on a less demanding engine, like a CPU or FPGA.

From a business perspective, the rationale for an open AI platform like this is that it reduces the demands placed on customers. And at the end of the day, it's our goal to make implementing and using our products and services as smooth a process as possible.
 

Our involvement in ONNX is about delivering real, tangible solutions to genuine questions about AI. And it has the additional benefit of untangling the hype from the technology, which our industry could definitely use.
