Beyond DRAM and Flash, Part 1: The End Is Nigh

August 5, 2014 • Blog Post

The comfortable drumbeat of free progress in memory technology is coming to an end. What can be done?

The history of computing is inextricably linked with Moore's Law, which says that the number of components on a chip of a given area will double every 18 months, for the same price. That last clause is actually not part of Gordon Moore's original observation, but it's the reason why today your cellphone has more computing power than a mainframe computer from the '60s, at a minuscule fraction of the size and an infinitesimal fraction of the cost. Remarkably, Moore's Law has held true for four decades and has come to seem as reliable a descriptor of our industry as Ohm's Law is of the current in a resistor.

In the past, I've written about the memory hierarchy and how its days are numbered. Now, I want to dive a little deeper into what the future holds for two of the technologies we use to hold our data: DRAM and Flash. In short, that future is not bright. We have to face up to the fact that both technologies are gently bumping up against a scale wall beyond which they will not get any better. The comfortable drumbeat of free progress is coming to an end. The implications are huge.
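To make the compounding concrete, here is a minimal C sketch of the growth implied by an 18-month doubling period. The 1965 starting count of 64 components is an illustrative assumption, not a quoted figure:

```c
#include <math.h>
#include <stdio.h>

/* Compound growth under an 18-month doubling period. The 1965 starting
 * count of 64 components is an illustrative assumption. */
int main(void)
{
    const double start = 64.0, doubling_years = 1.5;
    const int years[] = {1965, 1975, 1990, 2005, 2014};
    for (int i = 0; i < 5; i++) {
        double n = start * pow(2.0, (years[i] - 1965) / doubling_years);
        printf("%d: ~%.3g components per chip\n", years[i], n);
    }
    return 0;
}
```

Run it and the 2014 line lands in the hundreds of billions: eleven orders of magnitude of growth from compounding alone, which is why the mainframe-versus-cellphone comparison isn't hyperbole.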

Why does this scale wall matter?

We're in the middle of a data explosion[1]. Dealing with this deluge of data requires ever-more-capable computers to ingest and manipulate ever-larger data sets. We can no longer rely on our tried and trusted memory technologies to evolve. If your IT budget is growing exponentially along with the data, this isn't a problem. Is yours?

DRAM progress has almost stalled

The truth is that DRAM fell off the curve some years ago. As the figure shows, by the turn of the millennium we were already two years behind. Today, we're nearly a decade in arrears.

Underlying Moore's Law is a less well-known, but more exact, law called Dennard scaling (appropriately named after the inventor of DRAM itself). Dennard scaling is concerned with the reduction of a range of physical and electrical parameters over time.

The physical scaling is the problem. DRAM defines its state by storing electrons in a capacitor. Dennard scaling has reduced the size of the mouth of the capacitor, and thus the amount of charge that can be stored and the space available to the insulating layers that keep the electrons from escaping. With a shrinking mouth, the only way to keep the capacitance at a usable level is to increase the depth. Capacitors are now approximately the aspect ratio of a drinking straw! That we're able to make them at all is a testament to the ingenuity and persistence of chipmakers everywhere, but at some point a physical limit must be reached.
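To see where drinking-straw aspect ratios come from, here is a hedged back-of-envelope sketch: model the cell capacitor as a simple cylinder and ask how deep it must be to keep a usable capacitance as its mouth shrinks. Every parameter value below (target capacitance, dielectric constant, dielectric thickness) is an illustrative assumption, not a process figure:

```c
#include <stdio.h>

/* Model the DRAM cell capacitor as a cylinder: C ~ eps0*eps_r*(pi*d*h)/t.
 * All parameter values are illustrative assumptions, not process data. */
int main(void)
{
    const double PI = 3.141592653589793;
    const double EPS0 = 8.854e-12;   /* vacuum permittivity, F/m           */
    const double EPS_R = 40.0;       /* assumed high-k dielectric constant */
    const double T_DIEL = 4e-9;      /* assumed dielectric thickness, m    */
    const double C_TARGET = 20e-15;  /* assumed usable capacitance, ~20 fF */
    const double mouths_nm[] = {90.0, 50.0, 30.0};

    for (int i = 0; i < 3; i++) {
        double d = mouths_nm[i] * 1e-9;                         /* mouth diameter */
        double h = C_TARGET * T_DIEL / (EPS0 * EPS_R * PI * d); /* depth needed   */
        printf("mouth %2.0f nm -> depth %.1f um, aspect ratio ~%.0f\n",
               mouths_nm[i], h * 1e6, h / d);
    }
    return 0;
}
```

With these assumed numbers, a 90nm mouth needs a hole less than a micron deep, but a 30nm mouth needs roughly 2.4 microns: an aspect ratio around 80, which is exactly drinking-straw territory.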

The Hammer Test

Every time you use your credit card, call an ambulance or sell some stock, chances are HPE servers will handle your transaction. These computing systems are crucial so, naturally, we take reliability testing very seriously. Each time a new generation of DRAM chips comes out, we gather samples from every manufacturer and subject their chips to the Hammer Test. The Hammer is an automated test that subjects DRAM chips to extreme conditions to make sure they're up to the job. A couple of years ago, when 28nm DRAM devices were coming onto the market, we were shocked when every single one failed the Hammer. We were able to work with our memory vendors to achieve acceptable quality, but it was clear that scaling down had started to affect reliability. That's when we accelerated our search for alternative technologies.
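The post doesn't spell out what the Hammer does internally, but DRAM disturbance tests of this era were built around repeated row activations. Purely as a hedged illustration, here is the general shape of such an access loop in C. It is x86-specific (clflush), and mapping addresses to specific DRAM rows is platform-dependent and not handled here:

```c
/* Hedged sketch only: repeatedly activate two rows, flushing the cache
 * so every read reaches the DRAM chips rather than being served from
 * cache. This is the generic "row hammer" stress pattern; it is not a
 * description of HPE's actual test. */
#include <emmintrin.h>   /* _mm_clflush */
#include <stdint.h>
#include <stdlib.h>

static void hammer(volatile uint64_t *a, volatile uint64_t *b, long n)
{
    for (long i = 0; i < n; i++) {
        (void)*a;                      /* activate the row containing a   */
        (void)*b;                      /* activate the row containing b   */
        _mm_clflush((const void *)a);  /* evict, so the next access       */
        _mm_clflush((const void *)b);  /* misses cache and hits DRAM      */
    }
}

int main(void)
{
    enum { MIB = 1 << 20 };
    uint64_t *buf = malloc(64 * MIB);
    if (!buf) return 1;
    /* Offsets chosen arbitrarily; a real test derives them from the
     * platform's physical-address-to-DRAM-row mapping. */
    hammer(buf, buf + (8 * MIB) / sizeof *buf, 10 * 1000 * 1000);
    free(buf);
    return 0;
}
```

The point of the pattern is that neighbouring rows see millions of activations between refresh cycles, so any cell whose charge is disturbed by its neighbours' activity will reveal itself as a flipped bit.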

Flash scaling is also starting to slow

Flash memory technology isn't quite as close to its end-of-life. Whereas a DRAM cell consists of a transistor and a capacitor, a Flash cell is relatively simple: essentially just a four-pole transistor with the data stored as electrons trapped on the fourth, floating gate. This makes it much easier to scale (at the expense of read/write speed and the ability to address individual bits). However, the scaling problem persists: as Flash moves to finer manufacturing nodes, the number of electrons storing each bit drops and data retention suffers. Our assessment is that scaling Flash much beyond the 20nm node is problematic.
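For a hedged sense of how fast that electron budget collapses: if we assume the charge a floating gate can hold scales with its area (feature size squared), and anchor the estimate at an assumed ~1,000 electrons per cell at the 90nm node (an illustrative figure, not vendor data), the numbers fall off quickly:

```c
#include <math.h>
#include <stdio.h>

/* Assume electrons per cell scale with floating-gate area (feature size
 * squared). The ~1000-electron anchor at 90nm is an illustrative
 * assumption, not a vendor figure. */
int main(void)
{
    const double n_ref = 1000.0, node_ref = 90.0;
    const double nodes[] = {90, 45, 25, 20, 16};
    for (int i = 0; i < 5; i++) {
        double n = n_ref * pow(nodes[i] / node_ref, 2.0);
        printf("%2.0f nm node: ~%3.0f electrons hold one bit\n", nodes[i], n);
    }
    return 0;
}
```

Under these assumptions, a 20nm cell holds on the order of fifty electrons, and losing even a handful to leakage starts to matter for retention.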

One solution to scaling is to store up to three bits per cell (8 possible values). Unfortunately, that means dividing an already small number of electrons still further, and the lifetime of multilevel cells suffers as a result.

Manufacturers are turning to vertical scaling of cells, called 3D-NAND Flash, to fit more cells per square millimeter of silicon without making individual cells smaller. This is another remarkable piece of chip-making ingenuity, but it's not easy, and thus not cheap, to do. There is an expectation that price-per-bit will fall more slowly than in the past and that the lifetime of 3D-NAND will be short, reaching its end-of-life before the end of the decade. In addition, performance (read and write times) is being sacrificed to scaling. As in the case of DRAM, HPE saw the need for a replacement technology coming some years ago.
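To make the multilevel-cell tradeoff above concrete, here is a hedged sketch of how a fixed electron budget gets divided across charge levels. The ~100-electron budget for a 20nm-class cell is an assumption carried over from the estimate above:

```c
#include <stdio.h>

/* A fixed electron budget divided across 2^bits charge levels. The
 * ~100-electron budget for a 20nm-class cell is an assumption. */
int main(void)
{
    const double electrons = 100.0;
    for (int bits = 1; bits <= 3; bits++) {
        int levels = 1 << bits;
        printf("%d bit(s)/cell: %d levels, ~%.0f electrons between levels\n",
               bits, levels, electrons / (levels - 1));
    }
    return 0;
}
```

At three bits per cell, adjacent levels are separated by roughly a dozen electrons, so each program/erase cycle's wear and each stray leaked electron eats a much larger fraction of the signal margin than in a single-bit cell.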

We need a step change: enter the Memristor

The industry could wait for breakthroughs in DRAM and Flash technology that will steer us back onto the path of Moore's Law. I don't think that's going to happen. However, we can steer memory as a whole back by replacing DRAM and Flash with a technology that does scale. Our money, as everybody knows by now, is literally on the Memristor as a replacement for both DRAM and Flash. I'll explore how we came to that conclusion in part two.

1. L. M. Grupp, J. D. Davis, and S. Swanson. The Bleak Future of NAND Flash Memory. In Proceedings of the 10th USENIX Conference on File and Storage Technologies (FAST), 2012.
