SAN JOSE, Calif. — Rambus has working silicon in its labs for DDR5, the next major interface for DRAM dual in-line memory modules (DIMMs). The register clock drivers and data buffers could help double the throughput of main memory in servers, probably starting in 2019 — and they are already sparking a debate about the future of computing.

The Jedec standards group plans to release the DDR5 spec before June as the default memory interface for next-generation servers. However, some analysts note that it arrives amid emerging alternatives in persistent memories, new computer architectures and chip stacks.

“To the best of our knowledge, we are the first to have functional DDR5 DIMM chip sets in the lab. We are expecting production in 2019, and we want to be first to market to help partners bring up the technology,” said Hemant Dhulla, a vice president of product marketing for Rambus.

DDR5 is expected to support data rates up to 6.4 Gbits/second, delivering a maximum of 51.2 Gbytes/s, up from 3.2 Gbits/s and 25.6 Gbytes/s for today’s DDR4. The new version will drop the 64-bit link’s supply voltage to 1.1 V from 1.2 V and double the burst length to 16 from 8. In addition, DDR5 lets voltage regulators ride on the memory module rather than the motherboard.
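The peak figures above follow directly from the per-pin data rate and the 64-bit channel width. A minimal sketch of that arithmetic (the function name and defaults are illustrative, not from any spec):

```python
def peak_bandwidth_gbytes(data_rate_gbits: float, bus_width_bits: int = 64) -> float:
    """Theoretical peak throughput of one DIMM channel in Gbytes/s.

    Per-pin data rate (Gbits/s) times bus width (bits), divided by
    8 bits per byte. Ignores protocol overhead and refresh stalls.
    """
    return data_rate_gbits * bus_width_bits / 8

# DDR4 at 3.2 Gbits/s per pin -> 25.6 Gbytes/s
print(peak_bandwidth_gbytes(3.2))
# DDR5 at 6.4 Gbits/s per pin -> 51.2 Gbytes/s
print(peak_bandwidth_gbytes(6.4))
```

Real sustained throughput lands below these peaks once command overhead and refresh are accounted for.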

In parallel, CPU vendors are expected to expand the number of DDR channels on their processors from 12 to 16. That could drive main-memory sizes to 128 Gbytes from 64 Gbytes today.

DDR5 is expected to appear first in high-performance systems running large databases or memory-hungry applications such as machine learning. While some servers may lag in adopting DDR5 by six months or so, “it’s just a couple quarters, not a couple years…Everyone wants a fatter memory pipe,” said Dhulla.

About 90 percent of today’s servers use registered or load-reduced DIMMs that employ register clock drivers and data buffers. The chips generally sell for less than $5 and are supplied by companies including Rambus, IDT and Montage.

Continue reading on EETimes