There are a number of key changes to DDR that introduce new design challenges. However, savvy designers will use the transition time to nail down solutions.
Server and system designers are gearing up to transition from DDR4 to DDR5 server dual-inline memory module (DIMM) buffer chipsets in their upcoming designs. A foremost consideration involves major specification changes. It is expected that designers will focus on the top (most significant) half-dozen of these changes to advance server designs (see Table 1).
Table 1: Major Changes for DDR5 (Source: Rambus)
These are data and clock rates, VDD (operating voltage), power architecture, channel architecture, burst length, and improved support for higher-capacity DRAMs. Each of these changes brings design considerations, covered in the second part of this article.
The top data rate for DDR4 buffer chips is 3,200 megatransfers per second (MT/s) at a clock rate of 1.6 gigahertz (GHz). DDR5 will start at 3,200 MT/s on the low end and quickly reach data rates of 6,400 MT/s and clock rates of 3.2 GHz, with discussions for speeds beyond that. Speed thus increases significantly, and so do the design challenges that come with it.
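The clock-to-transfer-rate relationship above is easy to sanity-check: a double-data-rate interface transfers on both clock edges. A minimal sketch (the helper name is illustrative, not an industry term):

```python
# Illustrative arithmetic only: DDR transfers data on both clock edges,
# so the transfer rate in MT/s is twice the clock rate in MHz.
def ddr_data_rate_mts(clock_ghz: float) -> int:
    """Transfer rate in megatransfers per second for a DDR interface."""
    return round(clock_ghz * 1000 * 2)

print(ddr_data_rate_mts(1.6))  # DDR4 top clock -> 3200 MT/s
print(ddr_data_rate_mts(3.2))  # DDR5 target clock -> 6400 MT/s
```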
VDD, or operating voltage, is the second major change that server and system designers will see. Here, the DRAM and buffer chip registering clock driver (RCD) will drop from 1.2 V down to 1.1 V. This will save power. However, it will also add some challenges to DIMM designs.
A lower VDD means noise immunity becomes a bigger concern, both for the signals and for the supply itself. Signal margins shrink because the interface now runs from a 1.1-V supply rather than 1.2 V, so careful DIMM design and attention to signal noise are essential.
Power architecture comes in as major change number three. DIMMs will have a 12-V power management IC (PMIC) on them, allowing better granularity on the system’s power loading. This PMIC, which steps the 12-V input down to the 1.1-V supply, will also help with signal integrity and noise because you’ll have better on-DIMM control of the power supply.
A new DIMM channel architecture is perhaps one of DDR5’s major features, and this is the fourth change. DDR4 buffer chip DIMMs have a 72-bit bus, composed of 64 data bits plus eight ECC bits. With DDR5, each DIMM will have two channels. However, they’ll be 32 bits plus eight ECC bits each, resulting in two 40-bit channels compared to one 72-bit data channel.
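The bus-width arithmetic described above can be checked in a couple of lines; the function name here is illustrative, not an industry term:

```python
# Total channel width = data bits plus ECC bits (illustrative helper).
def channel_bits(data_bits: int, ecc_bits: int) -> int:
    return data_bits + ecc_bits

ddr4_dimm_bits = channel_bits(64, 8)      # one 72-bit channel
ddr5_dimm_bits = 2 * channel_bits(32, 8)  # two 40-bit channels = 80 bits
print(ddr4_dimm_bits, ddr5_dimm_bits)  # 72 80
```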
This improves efficiency. It also makes the DIMM design more symmetrical, because the two channels, on the left and right sides of the DIMM, share a single RCD. Each channel presents you, the server and system designer, with five 8-bit lanes on its side of the RCD. Hence, there are two DIMM channels but only one RCD, which drives two sets of outputs, the A side and the B side.
Other features are added for improvements with this new channel architecture. In DDR4, there are two output clocks from the RCD for each side of the DIMM. In DDR5, there’ll be four output clocks per side. This gives each lane an independent clock, which helps with signal integrity for the clock signal.
The fifth major change is burst length. DDR4 burst length is eight and burst chop length is four. For DDR5, burst length will be extended to 16 and burst chop to eight, preserving the burst payload even with the narrower channel (32 bits versus 64 bits). Because there will be two channels per DIMM, each with an equal or greater burst payload, memory efficiency will be increased.
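Assuming the DDR5 values of burst length 16 and burst chop eight, the payload arithmetic works out as follows (the helper name is illustrative):

```python
# Burst payload in bytes = data-channel width (bits) x burst length / 8.
def burst_payload_bytes(data_bits: int, burst_length: int) -> int:
    return data_bits * burst_length // 8

ddr4_payload = burst_payload_bytes(64, 8)    # 64 bytes per burst
ddr5_payload = burst_payload_bytes(32, 16)   # 64 bytes per burst, per channel
print(ddr4_payload, ddr5_payload)  # 64 64
```

Each DDR5 channel still delivers a full 64-byte cache line per burst, and a DIMM carries two such channels.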
A sixth change for DDR5 will be improvements for higher-capacity DRAM support. With DDR5 buffer chip DIMMs, the server or system designer can go up to 32-Gb DRAMs in a single-die package. DDR4 currently maxes out at 16 Gb in a single-die package. DDR5 will support features like on-die ECC, error transparency mode, post-package repair, and read and write CRC modes to support higher-capacity DRAMs.
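To put the die densities in context, here is a hedged sketch of usable rank capacity, assuming x4 DRAMs and counting data devices only (the device width and the helper name are my assumptions for illustration, not figures from the article):

```python
# Usable capacity of one rank of one channel, ECC devices excluded.
# Assumes x4 DRAMs; values are illustrative.
def rank_capacity_gbytes(die_density_gbits: int, data_bits: int,
                         device_width: int) -> int:
    data_devices = data_bits // device_width
    return data_devices * die_density_gbits // 8  # Gb -> GB

ddr4_rank = rank_capacity_gbytes(16, 64, 4)  # 16 x4 dies of 16 Gb -> 32 GB
ddr5_rank = rank_capacity_gbytes(32, 32, 4)  # 8 x4 dies of 32 Gb -> 32 GB/channel
print(ddr4_rank, ddr5_rank)  # 32 32
```

With two channels per DIMM, a DDR5 rank built from 32-Gb dies doubles the per-DIMM capacity of the DDR4 case above.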
Things to think about
These new changes introduce a number of design considerations tied to the higher DDR5 clock speeds, and thus a new round of signal integrity challenges. You’ll want to make sure that motherboards and DIMMs can handle the higher signal speeds. Also, system-level simulations should verify signal integrity at every DRAM location.
The good news is that DDR5 buffer chips improve signal integrity for the command and address signals sent from the host memory controller to the DIMMs. As shown in Figure 1, the command address (CA) bus for each of the two channels goes to the RCD and then fans out to the two sides of the DIMM. The RCD effectively reduces the loading on the CA bus that the host memory controller sees.
Figure 1: The CA bus for each of the two channels goes to the RCD and then fans out to the two sides of the DIMM. (Source: Rambus)
For DDR4 designs, the primary signal integrity challenges were on the dual-data-rate DQ bus, with less attention paid to the lower-speed CA bus. For DDR5 designs, even the CA bus will require special attention for signal integrity. In DDR4, there was consideration for using decision feedback equalization (DFE) to improve the DQ data channel. But for DDR5, the RCD’s CA bus receivers will also require DFE options to ensure good signal reception.
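To make the equalization idea concrete, here is a toy one-tap decision feedback equalizer: each bit decision is fed back to cancel the interference the previous symbol leaves on the current sample. The channel model and tap value are invented for illustration and are far simpler than a real CA-bus receiver.

```python
# Toy 1-tap DFE: subtract a weighted copy of the previous decision from
# each received sample to cancel first-post-cursor intersymbol interference.
def dfe_1tap(samples, tap):
    decisions = []
    prev = 0.0
    for x in samples:
        corrected = x - tap * prev
        prev = 1.0 if corrected >= 0 else -1.0  # slicer decision
        decisions.append(prev)
    return decisions

# Simple channel model: each sample picks up 0.3x the previous symbol.
tx = [1, -1, -1, 1, 1, -1]
rx = [tx[i] + 0.3 * (tx[i - 1] if i else 0) for i in range(len(tx))]
print(dfe_1tap(rx, 0.3))  # recovers the transmitted pattern
```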
The power delivery network (PDN) on the motherboard is another consideration, extending up to the DIMM with its PMIC. Given the higher clock and data rates, you will want to make sure that the PDN can handle the load of running at higher speed while maintaining signal integrity and delivering clean power to the DIMMs.
The DIMM connectors from the motherboard to the DIMM will also have to handle the new clock and data rates. For the system designer, the higher clock speeds and data rates around the printed circuit board (PCB) place more emphasis on system design for electromagnetic interference and compatibility (EMI and EMC). You will want to make sure that you can pass standards requirements, since layout only becomes more challenging as speeds rise.
Savvy server and system designers will take this transition period to carefully analyze the design changes posed by DDR5 server DIMM buffer chipsets. At the forefront of those changes and challenges is the higher speed compared to DDR4. New materials may be needed for the higher-speed motherboards and DIMMs. Also, power plane routing needs to be considered to improve EMI and EMC characteristics.
Furthermore, as the CA bus increases in speed, you will want to make sure that the buffer chips, specifically the RCD on the DIMM, have the appropriate DFE features to handle the CA bus at those rates without errors and to enable proper system operation.