
The Open Compute Project: Not just for servers anymore

What started as standardization for server technology has grown to touch every aspect of the data center.

If I were to ask what AT&T, Facebook, Alibaba, Google, Hewlett Packard Enterprise, and more than 40 other major companies have in common, it's unlikely you'd respond, “They are all members of the Open Compute Project.” But the fact is that these companies, which equip and operate the largest data centers in the world, have all contributed design, engineering, and production expertise as open standards that any member organization can use to develop standardized, high-performance, energy-efficient products.

The OCP was launched by Facebook in 2011 as an initiative to improve data center equipment design and operation. Seven years into the project, the OCP is a big deal. Beyond Facebook, project members that both use OCP technologies and have contributed to the standards include monster data center operators Microsoft and Google. 

Early days of the Open Compute Project

Initial OCP projects ranged from the simple but significant rack redesign—Open Rack and OpenU allow more equipment and better cooling than standard data center equipment racks while occupying the same footprint—to server designs built to open standards, optimized for the Open Rack and for the issues commonly faced in high-density data centers. By doing this, Facebook took what it learned in equipping huge data centers and shared that information with the public. Facebook also solicited other vendors that built products in the data center space to join it in providing open standards and open standards-based equipment. The initial server designs took the concept of "white box" servers a step further, laying out a standard server design specific to the needs of high-volume, high-density servers. Since 2011, the server design has been continually updated to take advantage of the latest processors from Intel and AMD and is now in its seventh generation.

These open standards efforts were not focused on just servers and racks. As adoption of the server standard grew, Facebook expanded its contributions, adding an open networking switch standard to the project. It worked with a Taiwanese vendor to introduce the Wedge switch standard, which runs a customized open source version of Linux. Other vendors, such as Mellanox, contributed switch specifications to the standards, supporting multiple operating systems as well as open standards such as the Open Network Install Environment (ONIE), which lets a bare-metal switch discover and install a compatible network operating system. The switch technologies have evolved, with the most recent versions embracing 100 Gigabit Ethernet.

The data center core is a triumvirate, however, and in addition to compute and networking, every data center needs storage. To this end, the OCP announced the Open Vault standard, a 2U Open Rack chassis originally designed to support 30 hot-swappable serial-attached SCSI drives. This design evolved as well: An NVMe JBOF (just a bunch of flash) design called Lightning was eventually released and, by 2017, had been adopted by a number of vendors.
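Part of the appeal of a JBOF like Lightning is that its flash simply shows up to the host as ordinary NVMe devices, so standard operating system tooling can see and manage it. As a rough illustration only—this is not part of any OCP specification, and it assumes a typical Linux sysfs layout under /sys/class/nvme—the following Python sketch lists the NVMe controllers visible to a host along with their model, serial, and firmware information.

```python
#!/usr/bin/env python3
"""Hedged sketch: enumerate NVMe controllers visible to a Linux host.

Illustrative only -- assumes a typical Linux sysfs layout (/sys/class/nvme)
and is not part of any OCP specification.
"""
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")


def read_attr(dev: Path, name: str) -> str:
    """Read a single sysfs attribute, returning '?' if it is absent."""
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "?"


def list_nvme_controllers():
    """Yield (controller, model, serial, firmware) for each NVMe controller."""
    if not SYSFS_NVME.is_dir():
        return  # no NVMe devices, or not a Linux host
    for dev in sorted(SYSFS_NVME.glob("nvme*")):
        yield (dev.name,
               read_attr(dev, "model"),
               read_attr(dev, "serial"),
               read_attr(dev, "firmware_rev"))


if __name__ == "__main__":
    for name, model, serial, firmware in list_nvme_controllers():
        print(f"{name}: model={model} serial={serial} firmware={firmware}")
```

Run on a host attached to such a chassis, each flash module appears in the listing like any other NVMe controller, which is exactly what makes an open, vendor-neutral storage design practical to operate at scale.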

Flash forward: OCP projects grow in scope

Many of the OCP standards have been acknowledged to be entry-level designs. While optimized for data center use, the standards do not represent the ultimate in performance or efficiency.

They are, however, a common starting point for design growth and evolution. The project has been a successful one: revenue from OCP equipment, excluding that of board member companies, reached $1.2 billion in 2017, according to an announcement at the 2018 OCP Summit.

As the OCP has grown, so has the scope of its projects. There are now 11 discrete OCP project groups, focused on:

  • Compliance and interoperability: Establishing a framework to simplify the process by which qualifying solutions can use the OCP brand.
  • Data center facility: Focusing on data center facility operations, including power, cooling, layout and design, and monitoring and control.
  • Hardware management: Working with existing tools, best practices, and remote management interfaces that can scale with the data center (an illustrative sketch of one such interface follows this list).
  • High-performance computing (HPC): Possibly the most ambitious in scope, seeking to develop a complete HPC platform: a multi-node networking and fabric design that is completely hardware agnostic, allowing the use of everything from general-purpose x86 CPUs to GPUs and custom-designed ASIC hardware.
  • Networking: Working toward the creation of fully open and disaggregated technologies to move beyond proprietary and closed networking switch technologies.
  • Rack and power: Continuing the Open Rack momentum and working toward fully integrating racks and rack-level power as data center design elements.
  • Server: Expanding the original effort to add the latest in server technologies, including ARM processors.
  • Storage: Evolving the Open Rack storage chassis along with components and peripherals.
  • Telco: Exploring the possibilities of applying the OCP model to the telco environment.

And two focused on technology incubation:

  • Open system firmware: Described as “an open source firmware project led by contributors, code committers, and a technical steering committee,” looking to create and deploy, at scale, open source hardware platform initialization and OS load firmware optimized for web-scale cloud hardware. That includes “documentation, testing, integration, and any other artifacts that aid the development, deployment, operation, or adoption of the open source project.”
  • Security: Working on the designs and specifications that enable software security for all aspects of the Open Compute community.
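
To make the hardware management item above a little more concrete: much of the industry's scalable remote management work converges on HTTPS/JSON interfaces such as DMTF's Redfish, which the BMCs on many OCP-style platforms expose. The sketch below is a hedged illustration rather than an OCP deliverable; the BMC address and credentials are hypothetical, and it simply walks the standard /redfish/v1/Systems collection to report each system's model, power state, and health.

```python
#!/usr/bin/env python3
"""Hedged sketch: poll a BMC's Redfish service for basic system inventory.

Illustrative only -- the BMC address and credentials are hypothetical, and a
real deployment should use proper TLS verification and session-based auth.
"""
import requests
from requests.auth import HTTPBasicAuth

BMC = "https://bmc.example.internal"         # hypothetical BMC address
AUTH = HTTPBasicAuth("monitor", "password")  # hypothetical read-only account


def get(path: str) -> dict:
    """Fetch a Redfish resource and return its decoded JSON body."""
    resp = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()


def report_systems() -> None:
    """Walk the /redfish/v1/Systems collection and print a one-line summary."""
    systems = get("/redfish/v1/Systems")
    for member in systems.get("Members", []):
        system = get(member["@odata.id"])
        print(f'{system.get("Id")}: model={system.get("Model")} '
              f'power={system.get("PowerState")} '
              f'health={system.get("Status", {}).get("Health")}')


if __name__ == "__main__":
    report_systems()
```

The point of the example is the interoperability: the same handful of well-known URIs can work across vendors' hardware, which is the kind of tooling reuse the hardware management group is chasing.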

The benefits are clear and well demonstrated by the success of the open source software community. Project success, however, will be determined by the continuing willingness of participating vendors to contribute technology and knowledge to this standards model. That willingness to contribute intellectual property is likely easier to achieve in the software world, but the OCP's success so far suggests that hardware vendors are prepared both to contribute their intellectual property and to develop, deploy, and sell products built on OCP technologies.

Sure, there will always be vertical markets where "performance at all costs" outweighs the advantages of collaboratively designed and deployed systems, and customers willing to spend the money on compute, networking, and storage that can give them a competitive business edge. But the OCP is focused on the mainstream of the data center market.

As we move into a more cloud-driven compute world, the number of customers for general-computing hardware on this scale will decrease, but the scale of the hardware being deployed will increase. That hardware needs to become increasingly powerful, efficient, and cost-effective.

Open Compute Project: Lessons for leaders

  • The OCP is a big deal. Launched in 2011 to improve data center equipment design and operation, it now counts every large data center operator among its participants.
  • OCP standards are optimized for data centers but do not represent the ultimate in performance or efficiency. However, they are a common starting point.
  • There are now 11 discrete OCP project groups, including two focused on technology incubation.
  • Project success will be determined by the continuing willingness of participating vendors to contribute their intellectual property.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.