What is Memory-Driven Computing?
Memory-Driven Computing definition
Memory-Driven Computing is a highly flexible, scalable architecture that puts memory, rather than the processor, at the center of the system, allowing it to complete computing tasks far faster, and with far less energy, than conventional systems.
Why is Memory-Driven Computing needed?
Every two years, we create more data than was previously created in all of history. This exponential growth is coming from an explosion of sources, and we have a vanishingly small window in which to turn that data into meaningful action. With transistor scaling slowing, or in some respects stopped, the gap between what we need to accomplish and what we can accomplish is widening: our ambitions are growing faster than our computers can improve.
We also have a sustainability problem. Today, IT infrastructure consumes over 10% of global energy production, and that production can only grow slowly. Unless we do something radical, we’ll have an energy gap too. Clearly, we need a new kind of computer.
How does Memory-Driven Computing work?
Memory-Driven Computing achieves its performance by letting any combination of computing elements (processors, memory, accelerators, and so on) be composed and communicate at the fastest possible speed: the speed of memory.
Today, the CPU vendor defines the architecture, meaning new technologies can only be introduced with a CPU update. If you want to connect more of something than the CPU has lanes for, you have to add a second CPU and deal with complex, slow communication between the two. Memory-Driven Computing breaks those constraints. Any combination of components can be connected at will over an ultra-fast web of interconnects (called a "fabric"), allowing them to communicate using simple, efficient commands at the speed of memory access, hence the name "Memory-Driven Computing."
HPE’s Memory-Driven Computing architecture