The Machine seeks adventurous developers

"The world's largest single-memory computer," part of an ongoing research project called The Machine, will require developers and organizations to work on it in order to succeed commercially.

For the past few years, Hewlett Packard Enterprise scientists have been working on a new computing architecture designed to replace the processor-centric machines that have defined computing since the 1940s. The effort is driven by the impending demise of Moore's Law, Gordon Moore's remarkably durable 1965 prediction that the number of transistors on an integrated circuit would keep doubling, at a pace later pegged at roughly every two years.

The processor development curve has flattened in recent years as chip designers run up against hard physical limits. Meanwhile, demand for computing continues to grow exponentially, with global data volumes projected to keep doubling roughly every two years.

Enter HPE's Memory-Driven Computing architecture, which combines photonic data transmission with non-volatile memory (NVM) that retains information even when it isn’t drawing power, and systems-on-a-chip that package processors and memory to greatly speed data processing. As part of an ongoing research program called The Machine, HPE recently announced what it claims is “the world’s largest single-memory computer,” a system with 160 terabytes of memory, displayed at the company's twice-yearly Discover conference in Las Vegas.

The company says that 160 TB of memory is equivalent to the data contained in approximately 160 million books, or five times the collection of the Library of Congress. In coming years, it expects to scale this memory pool to 4,096 yottabytes, or 250,000 times the total store of data in the world today. 
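For readers who want to sanity-check those comparisons, a quick back-of-the-envelope calculation is below. The roughly 1 MB-per-book figure and the decimal unit definitions are assumptions for illustration, not numbers supplied by HPE.

```python
# Rough sanity check of the comparisons above. The ~1 MB-per-book figure and
# the decimal definitions of terabyte/yottabyte are assumptions, not HPE's.
TB = 10**12
YB = 10**24

memory_pool = 160 * TB
books = memory_pool // 10**6        # ~1 MB per book -> about 160 million books
loc_collection = books // 5         # implies a collection of roughly 32 million books

future_pool = 4096 * YB
world_data = future_pool / 250_000  # ~1.6e22 bytes, on the order of 16 zettabytes

print(f"{books:,} books; Library of Congress comparison: {loc_collection:,} books")
print(f"Implied global data store today: {world_data:.2e} bytes")
```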

HPE has released a commercialization roadmap for the Machine that promises a commercial petascale system by 2020, followed by commercial exascale computing.  

Memory and loss

One early adopter is the German Center for Neurodegenerative Diseases (DZNE), which is using Memory-Driven Computing systems from HPE to analyze big data pertaining to Alzheimer’s disease.

“DZNE started talking to us last Christmas,” says Sharad Singhal, director of Machine applications and software for Labs. The center turned to Memory-Driven Computing when it found that traditional computing systems were inadequate to process the gargantuan data sets involved in its research.

DZNE has been conducting a huge longitudinal study that has followed 36,000 people for 30 years in an effort to determine who develops Alzheimer’s and why. It has begun adding in genomic data to the study as well.

With its old system, a supercomputer cluster, it took five and a half days to run its data through a pipeline—a series of programs in which the output of each is fed into the next. The first run of the pipeline on the prototype took 22 minutes. Now, weeks later, pipeline processing time is down to two and a half minutes.
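To make the pipeline idea concrete, here is a minimal sketch of the pattern in Python. The stage names and data shapes are hypothetical stand-ins, not DZNE's actual tools.

```python
# Minimal sketch of a pipeline: each stage consumes the previous stage's output.
# Stage names and data layout are hypothetical, for illustration only.

def quality_filter(reads):
    """Drop reads below an (arbitrary) quality threshold."""
    return [r for r in reads if r["quality"] >= 30]

def align(reads):
    """Stand-in for an alignment step; here it just tags each surviving read."""
    return [{**r, "aligned": True} for r in reads]

def summarize(reads):
    """Final stage: report how many reads made it through the pipeline."""
    return {"aligned_reads": len(reads)}

PIPELINE = [quality_filter, align, summarize]

def run_pipeline(data, stages=PIPELINE):
    # Feed the result of each stage into the next, in order.
    for stage in stages:
        data = stage(data)
    return data

if __name__ == "__main__":
    reads = [{"seq": "ACGT", "quality": 35}, {"seq": "TTGA", "quality": 12}]
    print(run_pipeline(reads))  # {'aligned_reads': 1}
```

In a Memory-Driven Computing setting, the idea is that intermediate results like these stay in one large memory pool between stages rather than being written back to storage after each step.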

The pipeline was devoted to genomic assembly, which involves matching bits of automatically sequenced genome into what DZNE’s Dr. Joachim Schultze calls a “book of life.” HPE's Memory Fabric Testbed, as the prototype is also known, has allowed the researchers to compress that process.
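The assembly step can be pictured as stitching overlapping fragments into one longer sequence. The toy below illustrates only that core idea, with made-up reads and a greedy merge; real assemblers are far more sophisticated.

```python
# Toy illustration of assembly: merge reads on their largest suffix/prefix overlap.
# The reads are made up; real assembly algorithms are far more involved.

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for size in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def merge(a, b):
    """Join two reads on their overlap."""
    return a + b[overlap(a, b):]

reads = ["ACGTAC", "TACGGA", "GGATTT"]
contig = reads[0]
for read in reads[1:]:
    contig = merge(contig, read)

print(contig)  # ACGTACGGATTT
```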

The need for speed is not just about efficiency, according to Schultze. When you are waiting almost six days for assembly to take place, your sensitivity to pattern recognition dulls. Schultze’s colleague Dr. Pierluigi Nicotera notes that the accelerated process allows scientists to examine more results for more patterns that could lead to an understanding of what causes certain neurodegenerative states.

What the prototype allowed DZNE to do, Schultze says, was to move from input to output “in one thought process.” 

The Machine User Group

The Machine project was built in large part on open source code and is meant to be transparent. Enlisting developers and organizations like DZNE to work on the platform will be crucial to its commercial success.

On June 6, 2017, HPE announced the formation of the Machine User Group, which will convene companies and developers to collaborate on programming for Memory-Driven Computing. Singhal led a discussion forum at Discover that made a case for programming on this new platform and walked developers through the available resources, including a large and growing Developer Toolkit.

The toolkit includes programs and libraries for developing Memory-Driven Computing applications. Among the tools offered are the following (a conceptual sketch of the persistent-memory pattern they target appears after the list):

  • Large Scale Graph Inference: An algorithm useful for Internet of Things applications and malware detection
  • Similarity Search: Used for high-dimensional objects
  • Managed Data Structures: A software library for persistent memory programming
  • Sparkle: A Spark-based cluster computing system for big data
  • Performance emulation for non-volatile memory latency and bandwidth: A DRAM-based performance emulation platform
  • Non-Volatile Memory Manager: A C++ library providing simple abstractions for allocating and accessing NVM in fabric-attached memory
  • Linux for Memory-Driven Computing: Includes support for fabric-attached persistent memory
  • Fabric-Attached Memory Emulation: An exploratory environment 
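
The specific APIs are documented with the toolkit itself. As a purely conceptual sketch, the example below shows the persistent-memory programming pattern these libraries target: data that lives in a byte-addressable region and survives process restarts. An ordinary memory-mapped file stands in for fabric-attached NVM, and the path and layout are assumptions for illustration.

```python
# Conceptual sketch of persistent-memory programming: state lives in a
# byte-addressable region that outlives the process. A memory-mapped file
# stands in for fabric-attached NVM; the path and layout are assumptions.
import mmap
import os
import struct

PATH = "/tmp/fake_nvm.bin"  # stand-in for a fabric-attached memory region
SIZE = 4096

# Create the backing region once, if it does not already exist.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    region = mmap.mmap(f.fileno(), SIZE)

    # Read a counter stored at offset 0, increment it in place, and flush.
    (count,) = struct.unpack_from("<Q", region, 0)
    struct.pack_into("<Q", region, 0, count + 1)
    region.flush()  # on real NVM this step would be a persistence barrier
    region.close()

print(f"This program has run {count + 1} time(s); the value persists across runs.")
```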

Platform play

The Machine does not have a specific form factor. It is more a platform than a device, and its component technologies are already making their way into HPE's present-day product line. HPE ProLiant servers incorporate Memory-Driven Computing ideas on a smaller scale, while the Superdome X and MC990 X servers offer large in-memory platforms.

“Interested customers can get on the road as they wait in line,” says Andrew Wheeler, deputy director of Hewlett Packard Labs. “If we can get you a head start in transferring your software to run on this environment, you’ll only have to make the transition once.”

Related links:

Up the value of your analytics with a better platform architecture

Persistent Memory: The Ideal Technology for Optimizing Enterprise Workloads

Getting started with Memory-Driven Computing: Tools you can use today

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.