DDR3 memory has lasted quite a long time now as the unchallenged memory standard. As DDR4 finally prepares to make inroads in 2014, what should we expect from it?
Now, in spring 2012, as Intel launches its initial Ivy Bridge processor parts and AMD follows with the Trinity APU, you can count how many generations of CPUs have succeeded each other over the past, say, six years.
Remember the Core 2 Quad in its 65 nm and 45 nm generations, followed by the Nehalem Core i7, then Westmere, then Sandy Bridge and now Ivy Bridge? Well, all six of these processor generations, just like their Athlon and Phenom counterparts at AMD, have depended on DDR3 as the memory standard for these six years or so, and it looks like it will continue this way for another two years. That is quite a bit longer than the roughly four years that DDR2 and DDR1 each lasted as the dominant memory type before that.
One reason for its long-running success is that, with semiconductor process improvements, it was – and seemingly still is – easy to push DDR3 net throughput far higher while still keeping reasonable latency. The starting point was DDR3-1066, while now we routinely see DDR3-2500 or higher module speeds – in fact, Ivy Bridge seems to have no problem handling the DDR3-3000 speed grade well.
Also, if even that bandwidth improvement isn't enough, CPU and packaging technology have evolved enough to allow quite affordable quad-channel memory per socket, providing more than sufficient throughput for almost any case.
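To put a number on that "more than sufficient throughput", here is a minimal back-of-the-envelope sketch using the usual peak-transfer rule: each channel is 64 bits (8 bytes) wide, and the DDR3-xxxx figure is millions of transfers per second. The quad-channel DDR3-1600 configuration used as an example is an assumption, picked as a typical high-end setup of the time:

```python
# Peak theoretical bandwidth of a DDR memory configuration.
# Each channel is 64 bits (8 bytes) wide; the DDR3-xxxx number
# gives transfers per second in millions (MT/s).

def peak_bandwidth_gbs(mts, channels=1, bytes_per_transfer=8):
    """Return peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return mts * 1e6 * bytes_per_transfer * channels / 1e9

# Example: a quad-channel DDR3-1600 platform.
print(peak_bandwidth_gbs(1600, channels=4))  # 51.2 GB/s
```

That 51.2 GB/s peak is several times what a dual-channel DDR3-1066 system offered at DDR3's introduction, which is why quad-channel setups cover almost any case.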
Anyway, after the long wait, the successor is on the horizon. The DDR4 standard is being finalised, and, even though it is expected to take another two years before we see it in major mainstream products, there are some key pointers to the upcoming memory's benefits.
Firstly, the speed grades. The initial base speed grades will be DDR4-2667 and DDR4-3200, with a reasonably quick ascent to DDR4-4000 and DDR4-4266 levels expected within half a year of the launch. To help reduce latency somewhat, the initial DDR4 chips will have more banks – 16 on each die, double DDR3's eight – so that more pages open at the same time can result in lower average latency.
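The jump in speed grades translates directly into per-channel bandwidth. A quick sketch, assuming the same standard 64-bit (8-byte) channel width as DDR3:

```python
# Per-channel peak bandwidth for the quoted DDR4 speed grades,
# assuming the usual 64-bit (8-byte) channel width.
for mts in (2667, 3200, 4000, 4266):
    gbs = mts * 1e6 * 8 / 1e9
    print(f"DDR4-{mts}: {gbs:.1f} GB/s per channel")
```

So even the DDR4-3200 base grade delivers 25.6 GB/s per channel – what a dual-channel DDR3-1600 system needs two channels to reach.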
Then, the power saving. With 1.2 V being the standard voltage, and DDR4L options going all the way down to 1.05 V, expect cooler DIMMs as well. Since dynamic power scales with the square of the voltage, there is an obvious benefit there.
After that, there are design changes expected at the module and board level. DDR4 may bear some similarity to Rambus memory in that it is more of a point-to-point interface, with less load expected per memory channel. That would help both achievable bandwidth and latency – a helpful move, since the initial DDR4 parts are all expected to have double-digit CL latency figures… how about CL15 at DDR4-3200?
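Those double-digit CL figures look scary, but absolute latency depends on the clock, which runs at half the transfer rate. A small sketch converting CAS latency from cycles to nanoseconds, comparing the quoted CL15 at DDR4-3200 against a common DDR3-1600 CL9 part:

```python
# Convert a CAS latency (in clock cycles) to nanoseconds.
# The memory clock runs at half the DDR transfer rate.
def cas_ns(mts, cl):
    clock_mhz = mts / 2
    return cl / clock_mhz * 1000  # cycles / MHz -> ns

print(cas_ns(1600, 9))   # DDR3-1600 CL9  -> 11.25 ns
print(cas_ns(3200, 15))  # DDR4-3200 CL15 -> 9.375 ns
```

Despite the bigger CL number, DDR4-3200 CL15 would actually have a shorter absolute CAS latency than DDR3-1600 CL9 – the higher clock more than makes up for the extra cycles.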
Finally, there are error correction and management benefits too. DDR4 has better ways of handling parity and ECC errors than previous memory types, and it can recover from both command and parity errors without crashing the system. This is particularly useful for server implementations, like the Haswell-EX platforms expected some two years from now, for instance.
In summary, DDR4 will bring along even higher bandwidth, accompanied by higher latency settings. This might impact desktop and workstation benchmarks at the early stage; however, just as with DDR3, more optimised dies should appear over time.
On the other hand, its further power savings should help adoption. Companies like Samsung and Hynix expect to sample higher-speed DDR4 chips this year, with high-density 4 Gbit dies following next year. But, again, any real deployment has to wait until 2014, when at least some Intel and AMD platforms are expected to support the new memory.