Rebuilding the Computer for the Big Data Era
May 16, 2017 • Blog Post • Antonio Neri, EVP
IN THIS ARTICLE
- The largest single-memory computing system on the planet explained
- How Memory-Driven Computing will end the data crisis
HPE's new Memory-Driven Computing prototype is a 21st century solution to a 21st century problem
It's been just three years since we shared our vision to reinvent the building blocks of computing from the ground up. That effort, known as The Machine research project, was born of the realization that we're asking today's computers to complete tasks that no one could have imagined 20 years ago, let alone 60 years ago at the dawn of the computer age.
That realization led us to develop a new paradigm that we call Memory-Driven Computing.
In November 2016, we announced that our new Memory-Driven Computing architecture had moved from idea to reality and was now running on real hardware. Today, we unveiled the next evolution of that system, a new prototype built on the Memory-Driven Computing architecture that is twenty times more powerful than the prototype we announced just six months ago.
When we began our journey, we believed that it wouldn't be enough to build a bigger, more powerful computer on top of the architecture that underpins all of today's technology. This milestone has only strengthened that conviction.
While computers have brought us tremendous advances over the past 60 years, the traditional computing architecture was not designed to handle the data-intensive tasks of today's emerging connected world, where everything - from homes to factories, wind turbines to city blocks - will compute. The time is coming when computers built around processors won't be able to keep pace with the amount of data pouring in, let alone our desire to extract instant insights from that data.
Memory-Driven Computing will redefine how computers - from smartphones to supercomputers - work and what they're capable of. In today's computing systems, memory and storage are entirely separate and accessed by the processor when needed. In fact, as much as 90 percent of the work in today's computing systems is devoted to moving information between tiers of memory and storage. With Memory-Driven Computing, we've eliminated those layers.
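The difference can be pictured with a toy sketch (not HPE's actual architecture or code): in a conventional system, bytes must be copied from a storage tier into a memory tier before the processor can work on them, while a single-pool design lets the processor operate on the data where it already lives.

```python
# Toy illustration of the tiered vs. single-pool distinction described above.
# The buffers and names here are illustrative assumptions, not HPE APIs.

storage = bytearray(b"sensor-readings" * 1000)  # stands in for a disk/SSD tier

# Conventional path: copy storage -> memory, then compute on the copy.
memory = bytes(storage)          # explicit data movement between tiers
conventional_sum = sum(memory)

# Single-pool path: compute directly on the data in place, no copy step.
in_place_sum = sum(storage)

# Same result either way; the single-pool path skips the movement overhead.
assert conventional_sum == in_place_sum
```

The point of the sketch is only that the copy step is pure overhead: it produces the same answer while consuming time and energy, which is the work the article says can dominate today's systems.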
The new 160-terabyte (TB) system is the largest single-memory system on the planet, capable of working simultaneously with vast amounts of data at a scale never before possible within one system. Because it is built on our Memory-Driven Computing architecture - the only one of its kind in the world - we've removed inherent inefficiencies to a degree that today's computers simply cannot match.
To put that into context, 160 TB is enough memory to simultaneously work with the data held in approximately 160 million books - five times the number of books in the Library of Congress. We've never been able to work with data sets of this size in this way.
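The book comparison is easy to sanity-check as back-of-envelope arithmetic, assuming roughly 1 MB of text per book (our assumption, not a figure from the article):

```python
# Back-of-envelope check of the 160 TB / 160 million books comparison.
TB = 10**12                       # decimal terabyte
total_memory = 160 * TB           # the prototype's single memory pool
bytes_per_book = 10**6            # ~1 MB per plain-text book (assumption)

books = total_memory // bytes_per_book
print(f"{books:,} books")         # 160,000,000 books

# The article's "five times the Library of Congress" comparison implies:
implied_loc_collection = books // 5
print(f"{implied_loc_collection:,} books")  # 32,000,000 books
```

At 1 MB per book the 160 million figure falls out directly, and the "five times" comparison implies a Library of Congress collection of about 32 million books.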
Today's landmark achievement in Memory-Driven Computing means that massive leaps forward are now within reach. And this is just the beginning.
Tomorrow's computers, driven by memory, will be different. This latest prototype demonstrates not only the tremendous potential of our ability to scale this technology, but also the range of form factors to which we can apply this level of compute - from the largest supercomputers to the smallest edge devices - all occupying a smaller footprint than is possible today.
We are on the path toward solving the looming data crisis - solving a problem that we believe will be critical in applications as varied as creating data-driven cures for rare diseases, lighting up smart cities and even one day sending humans to Mars.
It's an exciting time for HPE. Today's news doesn't just represent the latest step in our research, but also, a giant leap forward in realizing a vision set in motion three years ago.