Easing PCIe 6.0 Integration

Article By: Madhumita Sanyal, Synopsys

PCIe data rates are moving from 32G to 64G because of the data explosion and increasing bandwidth for high-performance computing (HPC).

Because of the data explosion and the increasing bandwidth demands of high-performance computing (HPC), PCI Express (PCIe) data rates are moving from 32G (PCIe 5.0) to 64G (PCIe 6.0). In addition, because NRZ signaling cannot practically support the higher data rate, PCIe 6.0 moves to PAM-4 signaling. The higher volumes of data and faster data movement for computing, networking, and storage are pushing performance and latency optimization to the highest levels.
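
As a quick back-of-the-envelope illustration of why PAM-4 doubles the data rate without doubling the channel bandwidth, the short Python sketch below computes per-lane rates from symbol rate and bits per symbol (the helper function is ours, used only for illustration):

```python
# Sketch: per-lane data-rate arithmetic for NRZ vs. PAM-4 signaling.
# PCIe 5.0 runs NRZ at 32 GT/s (1 bit/symbol); PCIe 6.0 doubles the rate
# to 64 GT/s by moving to PAM-4 (2 bits/symbol) at the same symbol rate.

def per_lane_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    """Raw per-lane data rate in Gb/s (ignoring encoding/FEC overhead)."""
    return symbol_rate_gbaud * bits_per_symbol

pcie5 = per_lane_gbps(32, 1)   # NRZ: 1 bit per symbol -> 32 Gb/s
pcie6 = per_lane_gbps(32, 2)   # PAM-4: 2 bits per symbol -> 64 Gb/s

print(f"PCIe 5.0 (NRZ):   {pcie5:.0f} Gb/s per lane")
print(f"PCIe 6.0 (PAM-4): {pcie6:.0f} Gb/s per lane")
# The channel's Nyquist frequency stays near 16 GHz in both cases; PAM-4
# trades voltage margin (three eyes instead of one) for double the bits
# per unit of channel bandwidth.
```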

Figure 1 shows the inside of a server box in a server rack unit, illustrating the necessary shift from PCIe 5.0 to PCIe 6.0 data rates for network interface cards (NICs), SSDs, and overall chip-to-chip connectivity, as well as alignment with network speeds moving from 400G to 800G to 1.6T Ethernet. A server box has fixed dimensions, so a PCIe 6.0 implementation must maintain a footprint, form factor, and latency similar to PCIe 5.0. Pushing data at higher speeds through CPUs, GPUs, SSDs, accelerators, and NICs requires more power. Moreover, the entire chassis can heat up and require cooling to keep the components at a safe operating temperature, which consumes additional power. Hence, power consumption, system latency, and area become key parameters to consider, forcing SoC designers to re-architect their HPC designs. This article outlines how designers can overcome the power, performance, area, and latency challenges of PCIe 6.0 designs using pre-validated, comprehensive PCIe IP solutions.
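
To make the Ethernet alignment concrete, here is a minimal sketch, using raw per-direction numbers and ignoring FLIT/FEC and protocol overhead, of whether a single x16 PCIe 6.0 port can feed a 400G, 800G, or 1.6T NIC:

```python
# Sketch: raw x16 PCIe 6.0 bandwidth vs. Ethernet NIC speeds (per direction).
LANE_GBPS = 64   # PCIe 6.0 per-lane rate, Gb/s
LANES = 16

link_gbps = LANE_GBPS * LANES   # raw, per direction
print(f"x16 PCIe 6.0 raw: {link_gbps} Gb/s ({link_gbps / 8:.0f} GB/s) per direction")

for eth_gbps in (400, 800, 1600):
    fits = "yes" if link_gbps >= eth_gbps else "no (needs more than one port or a faster generation)"
    print(f"  feeds {eth_gbps}G Ethernet: {fits}")
```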

Figure 1: Inside of a server operating at higher PCIe 6.0 and 400G/800G/1.6T Ethernet speeds

To achieve the best performance, PCIe systems are optimized with faster clocks; however, meeting timing at higher clock rates often requires additional pipelining, which increases latency and adds area and power. For these reasons, SoC architectures are going through a shift: SoC designers need to balance the fastest performance and the lowest latency while minimizing area and power. Optimizing the four parameters (power, performance, area, and latency) in a PCIe 6.0 design implementation requires designers to perform tradeoff analysis, which is time consuming.
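
The following illustrative sketch, with made-up stage counts and clock frequencies rather than data from any real design, shows why a faster clock does not automatically mean lower latency once extra pipeline stages are needed to close timing:

```python
# Sketch of the frequency-vs-latency tradeoff: pushing the clock higher
# forces extra pipeline stages to meet timing, so cycle-count latency can
# grow faster than the period shrinks. Stage counts below are illustrative.

def latency_ns(pipeline_stages: int, clock_ghz: float) -> float:
    return pipeline_stages / clock_ghz

scenarios = [
    ("1.0 GHz datapath, 4 stages", 4, 1.0),
    ("2.0 GHz datapath, 10 stages", 10, 2.0),  # extra stages to close timing
]
for name, stages, ghz in scenarios:
    print(f"{name}: {latency_ns(stages, ghz):.1f} ns")
# Doubling the clock does not halve latency if timing closure demands more
# stages -- hence the four-way power/performance/area/latency tradeoff.
```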

Mitigating all these challenges requires strong EDA tool expertise and IP design knowledge. When designers make SoC-level partitioning and floorplanning decisions, the physical design aspects of the IP interfaces, such as PCIe PHY and controller timing feasibility (performance), are often not fully considered. At the same time, designs are getting larger, with added functionality and additional physical functions such as security. PCIe design size and timing complexity are making SoC-level integration increasingly challenging.

For added flexibility and to support different slots, a set of PCIe lanes is partitioned into multiple links of smaller width, also known as PCIe bifurcation. For added bandwidth, PCIe aggregation is used to increase the number of transactions. Typical bifurcation options include the following (enumerated programmatically in the sketch after this list):

  • A PCIe x8 card slot can be bifurcated into two x4 slots
  • A PCIe x16 slot can be bifurcated into four x4 (x4/x4/x4/x4), two x8 (x8/x8), or one x8 and two x4 (x8/x4/x4 or x4/x4/x8)
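
A minimal sketch of the binary-splitting rule behind these modes (each x2N port splits into two xN halves, down to x4) can enumerate them programmatically:

```python
# Sketch: enumerate the bifurcation modes of a PCIe port by recursive binary
# splitting, which reproduces the modes listed above for a x16 slot.
from itertools import product

def bifurcations(width: int, min_width: int = 4):
    """Yield tuples of link widths reachable by binary bifurcation."""
    yield (width,)
    if width > min_width:
        half = width // 2
        for left, right in product(bifurcations(half, min_width), repeat=2):
            yield left + right

for mode in sorted(set(bifurcations(16)), key=lambda m: (len(m), m)):
    print(" ".join(f"x{w}" for w in mode))
# -> x16 | x8 x8 | x4 x4 x8 | x8 x4 x4 | x4 x4 x4 x4
```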

Support for PCIe bifurcation and aggregation requires multiple controllers paired with a PHY IP that supports multiple lanes. For example, if a user needs to bifurcate a x16 link into x16, x8, and two x4 configurations, four controllers (one x16, one x8, and two x4) are required with a single x16 PHY. Understanding TX and RX data flow, maintaining lane-to-lane transmitter (TX) skew requirements across 4 to 16 lanes, achieving multiple clock alignment strategies between the x16 PHY lanes and the controllers, and building a balanced clock tree across 16 lanes can be very challenging.
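
As an illustration of the lane bookkeeping involved, the sketch below assigns contiguous PHY lanes to each active controller for one bifurcation mode; the controller names and the contiguous-assignment rule are illustrative assumptions, not a description of any specific IP:

```python
# Sketch: map a x16 PHY's lanes onto the four controllers (one x16, one x8,
# two x4) that a x16/x8/2-x4 bifurcation requires.
CONTROLLERS = {"ctrl_x16": 16, "ctrl_x8": 8, "ctrl_x4_0": 4, "ctrl_x4_1": 4}

def lane_map(mode: list[tuple[str, int]]) -> dict[str, range]:
    """Assign contiguous PHY lanes to each active controller in order."""
    mapping, lane = {}, 0
    for ctrl, width in mode:
        assert CONTROLLERS[ctrl] >= width, f"{ctrl} cannot serve x{width}"
        mapping[ctrl] = range(lane, lane + width)
        lane += width
    assert lane <= 16, "mode exceeds the x16 PHY"
    return mapping

# One x8 link plus two x4 links sharing the same x16 PHY:
for ctrl, lanes in lane_map([("ctrl_x8", 8), ("ctrl_x4_0", 4), ("ctrl_x4_1", 4)]).items():
    print(f"{ctrl}: lanes {lanes.start}-{lanes.stop - 1}")
```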

Figure 2 shows the clock balancing challenges for a x16 link and for multi-link PCIe 6.0 PHY and controller tiles. For a x16 link, one clock from one PHY lane (part of the 16-lane configuration) must drive the whole x16 controller, requiring clock tree load balancing across all lanes.
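
A minimal sketch of the balance check involved, using placeholder insertion delays and an assumed skew budget in place of real clock tree synthesis (CTS) reports, looks like this:

```python
# Sketch: check clock-tree balance across the 16 lane endpoints of a x16
# controller. Delays and the skew budget are illustrative placeholders.
import random

random.seed(0)
insertion_delays_ps = [350 + random.uniform(-15, 15) for _ in range(16)]

skew_ps = max(insertion_delays_ps) - min(insertion_delays_ps)
BUDGET_PS = 20  # assumed skew budget for the lane clock distribution

print(f"clock skew across 16 lanes: {skew_ps:.1f} ps "
      f"({'OK' if skew_ps <= BUDGET_PS else 'needs rebalancing'})")
```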

Figure 2: Clock architecture in a single-link and multi-link PCIe 6.0 configuration

Due to SoC-level floorplanning constraints, PCIe PHY and controller tiles need to be placed in north-south (N/S) and east-west (E/W) orientations, which present their own sets of challenges in physical design as well as in high-speed differential signal escapes in package design.

Floorplanning Challenges

Due to die size limitations, optimizing beachfront (die-edge) bandwidth is a key requirement for SoCs. In this section, single- and multi-link PCIe examples are used to demonstrate optimized beachfront use and reusability of PHY and controller tiles in SoCs.

Figure 3 shows a x16 PCIe 6.0 PHY (or four PPA-optimized x4 PCIe 6.0 PHYs abutted) on the N/S edge of the die, placed in a single row and interfacing with a x16 controller.

Figure 3: Single x16 PCIe link in N/S orientation

Figure 4 shows four x4 PCIe 6.0 PHYs in E/W orientation, placed in a single column and interfacing with a x16 controller.

Figure 4: Single x16 PCIe link in E/W orientation

Figure 5 shows a x16 PCIe 6.0 PHY (or four x4 PHYs abutted) on the N/S edge of the die, interfacing with multiple PCIe controllers of x4, x8, and x16 link widths.

Figure 5: Multiple link configuration in N/S orientation

Figure 6 shows four x4 PCIe 6.0 PHYs placed on the E/W edge of the die, interfacing with multiple PCIe controllers of x4, x8, and x16 link widths.

Figure 6: Multiple link configuration in E/W orientation

In a PCIe switch SoC, multiple instantiations of PCIe links require multiple rows of Physical Medium Attachment (PMA) depth along the N/S and E/W chip periphery, or beachfront. To ease timing challenges, a single PCIe tile can be implemented with bottom-up partitioning and floorplanning at an optimal aspect ratio. However, multiple instantiations on all edges of the chip demand a top-down approach. Hence, extensive what-if analyses of PCIe 6.0 link implementations across various floorplanning and placement scenarios are needed for higher-frequency timing closure as well as for single-link or multi-link PCIe tile aspect ratio and chip beachfront optimization. These what-if analyses include a datapath optimized for higher clock frequencies, clock tree balancing techniques, robust pipelining between controllers and PHYs (if they must be placed apart due to SoC-level floorplanning restrictions), and x8/x16 controller aspect ratios.
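
As one example of such a what-if analysis, the sketch below estimates how many pipeline stages a PHY-to-controller path would need at a given placement distance; the wire delay and timing numbers are illustrative assumptions, not characterized values:

```python
# What-if sketch: pipeline stages needed when floorplanning forces the PHY
# and controller apart. All numbers are illustrative assumptions.
import math

WIRE_DELAY_PS_PER_MM = 120   # assumed buffered-wire delay
LOGIC_SETUP_PS = 150         # assumed flop-to-flop logic + setup margin

def stages_needed(distance_mm: float, clock_ghz: float) -> int:
    period_ps = 1000 / clock_ghz
    reach_per_stage_mm = (period_ps - LOGIC_SETUP_PS) / WIRE_DELAY_PS_PER_MM
    return math.ceil(distance_mm / reach_per_stage_mm)

for dist in (1.0, 3.0, 6.0):
    n = stages_needed(dist, 2.0)
    print(f"{dist} mm @ 2 GHz: {n} pipeline stage(s), adding {n * 0.5:.1f} ns of latency")
```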

Reference clock forwarding studies and their impact on signal integrity are also major concerns for high-frequency designs. Power integrity analysis with tens of PCIe lanes and multiple PCIe links also needs to be addressed.

For further beachfront optimization, the PCIe 6.0 PHY IP can be stacked in double N/S rows and double E/W columns along the die edges. The PCIe 6.0 IP needs to be designed with optimized bump locations. System designers must perform package escape studies and determine the number of layers required to optimally route the differential signals through the N/S and E/W beachfront while meeting the IP signal integrity specification, all of which requires exhaustive system simulations to validate. Figure 7 shows four x4 PCIe PHYs placed in two stacks on the N/S edge of the die, interfacing with x4, x8, and x16 controllers to form one multi-link x16 PCIe 6.0 tile.
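
A simple sketch of the beachfront accounting, with an assumed per-PHY edge width standing in for the real physical-datasheet value, shows how double stacking halves the die edge consumed:

```python
# Sketch: die-edge (beachfront) consumed by N x4 PHYs in single- vs.
# double-stacked placement. The per-PHY edge width is a made-up placeholder.
PHY_EDGE_MM = 1.8      # assumed beachfront width of one x4 PHY tile
N_PHYS = 4             # four x4 PHYs forming one x16 link

single_stack = N_PHYS * PHY_EDGE_MM          # one row along the edge
double_stack = (N_PHYS / 2) * PHY_EDGE_MM    # two rows, half the edge each

print(f"single stack: {single_stack:.1f} mm of beachfront")
print(f"double stack: {double_stack:.1f} mm of beachfront "
      f"(deeper placement; more package-escape layers likely)")
```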

Figure 7: Double-stacking four x4 PCIe 6.0 PHYs in N/S orientation

Figure 8 shows four x4 PCIe PHYs placed in two stacks on the E/W edge of the die, interfacing with x4, x8, and x16 controllers to form one multi-link x16 PCIe 6.0 tile.

Figure 8: Double-stacking four x4 PCIe 6.0 PHYs in E/W orientation

With CXL 2.0, the key considerations are the 2-GHz synchronous interface and the SoC interface to the PCIe/CXL tile. This interface can be extremely timing critical depending on pin placement and SoC logic placement.
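
To see how tight a 2-GHz interface is, here is an illustrative breakdown, with all component numbers assumed, of where the 500-ps clock period goes:

```python
# Sketch: timing budget of a 2-GHz synchronous SoC-to-tile interface.
# All component numbers are illustrative assumptions; the point is how
# little slack remains for logic and routing at a 500-ps period.
CLOCK_GHZ = 2.0
period_ps = 1000 / CLOCK_GHZ          # 500 ps

budget = {
    "launch/capture clock skew": 50,  # assumed
    "flop clk-to-q + setup":     120, # assumed
    "jitter / uncertainty":      40,  # assumed
}
remaining_ps = period_ps - sum(budget.values())

for item, ps in budget.items():
    print(f"{item:28s} {ps:4d} ps")
print(f"{'left for logic + routing':28s} {remaining_ps:4.0f} ps of {period_ps:.0f} ps")
```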

Addressing and Converging Design Implementation Challenges

PCIe PHY placements in single-row/single-column (single-stack) orientation, and multi-link bifurcation with double rows in N/S and double columns in E/W (double-stack) orientation, need exhaustive studies that require EDA tool expertise as well as IP design and implementation knowledge.

Summary

PCIe 6.0 is driving the next generation of compute, storage, and networking innovation in data centers for high-performance computing, AI/ML, and cloud. With silicon-proven PCIe IP supporting data rates from 2.5G to 64G PAM-4 and cutting-edge AI/ML-driven EDA tool expertise, Synopsys is enabling SoC designers to achieve the best power, performance, area, and latency, while addressing reliability, power, and signal integrity.

Synopsys has performed all the required work, such as package escape studies; PHY and controller placement optimization, including partitioning and floorplanning; pin placement; place and route; timing closure; and signoff electromigration/IR-drop analysis, to help companies successfully tape out large-scale SoCs with multiple PCIe 6.0 instantiations. Synopsys provides integration-friendly deliverables for PCIe PHYs and controllers with Integrity and Data Encryption (IDE) security, along with expert-level support, which can ease PCIe 5.0/6.0 integration from design to implementation.

This article was originally published on EE Times.

About the Author

Madhumita Sanyal is a Senior Staff Technical Marketing Manager for Synopsys’ high-speed SerDes PHY IP portfolio. She has 17 years of experience in design and application of ASIC WLAN chips, logic libraries, embedded memories, and mixed-signal IP. Madhumita holds a Master of Science degree in Electrical Engineering from San Jose State University and LEAD from Stanford Graduate School of Business.
