Chiplet Strategy is Key to Addressing Compute Density Challenges

Article by Balaji Baktha, Ventana Micro Systems

Chiplet integration can enable disaggregated servers, heterogeneous computing and domain-specific acceleration within data centers.

Data center workloads are quickly evolving, demanding high compute density with varying mixes of compute, memory and IO capability. This is driving architectures away from one-size-fits-all monolithic solutions toward disaggregated functions that can be independently scaled for specific applications.

It is imperative to adopt the latest process nodes to deliver the needed compute density. However, doing so with traditional monolithic SoCs carries an inherent disadvantage: escalating costs and time-to-market challenges result in unfavorable economics. To address this dilemma, chiplet-based integration strategies are emerging in which compute can benefit from the most advanced process nodes, while application-specific memory and IO integrations reside on mature trailing process nodes.

Further, disaggregating a solution into its composable parts opens the door for an ecosystem of partners who can independently develop optimized chiplets which can then be heterogeneously mixed and matched into a variety of highly differentiated and cost-effective solutions.

The chiplet approach strikes a balanced trade-off, delivering a wide range of domain-specific solutions from a set of composable chiplet functions rather than a single monolithic design. Compute chiplets tend to adopt leading-edge process nodes rapidly for the best performance, power and area. Conversely, memory and IO functionality relies on mixed-signal capabilities, which benefit less from the latest node and require longer validation cycles, so chiplet integration on a mature trailing process node is more advantageous.

Since the configuration of memory and IOs is typically workload-specific, chiplet integration on a more cost-effective node tends to be a high-value, differentiated SoC development. On the other hand, the compute chiplet becomes more general-purpose, able to amortize the higher leading-edge node cost across a wider range of applications and offering a greater asset-reuse opportunity. Finally, a system integrator can mix and match chiplets to address a wide range of applications and product SKUs without incurring the high cost of taping out new designs.

For a typical high-performance CPU design, these benefits result in savings of at least $20 million per product and accelerate time to market by roughly two years. The cost savings come from reductions in IP licensing, mask sets, EDA tools and development effort. The time-to-market advantage stems from a significant reduction in the complexity of integrating, verifying and productizing a solution versus a monolithic approach. Finally, the packaging technology required to integrate multiple chiplets has already entered the mainstream and does not add significant risk to bringing a more cost-effective product to market.

For a multi-vendor chiplet approach to become mainstream, two things need to be in place: an open, standardized die-to-die (D2D) interface between chiplets, and an ecosystem of function-specific chiplets that can be readily integrated to address different applications. Industry leaders are currently investing resources and effort to ensure both are addressed in the near future.

The Open Domain-Specific Architecture (ODSA) working group within the Open Compute Project was a natural home for the D2D standardization effort, ensuring it could be effectively leveraged within the data center and applications out to the 5G network edge. Multiple vendors are bringing their highly portable D2D Bunch-of-Wires (BoW) PHY technology to provide the electrical physical layer between chiplets. On top of the PHY layer, Ventana has created a lightweight Link Layer to transport standard interconnect protocols efficiently across the chiplet interfaces.
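
As a purely conceptual sketch of that layering (the flit size, header fields and protocol identifiers below are illustrative assumptions, not the BoW or Ventana link-layer formats), the link layer's job is to frame upper-layer protocol packets into fixed-size flits that the PHY carries between dies:

```c
#include <stdint.h>
#include <string.h>

/* Conceptual layering sketch only: the flit size, header fields and
 * protocol identifiers are illustrative assumptions, not the actual
 * BoW or Ventana link-layer formats. */
#define FLIT_PAYLOAD_BYTES 60

enum proto_id { PROTO_COHERENT = 1, PROTO_IO = 2, PROTO_VENDOR = 3 };

struct flit {                      /* fixed-size unit handed to the PHY */
    uint8_t  proto;                /* upper-layer protocol carried      */
    uint8_t  seq;                  /* sequence number for ordering      */
    uint16_t crc;                  /* error detection across the link   */
    uint8_t  payload[FLIT_PAYLOAD_BYTES];
};

/* Frame an upper-layer packet into flits and hand them to a PHY transmit hook. */
static void link_send(enum proto_id proto, const uint8_t *pkt, size_t len,
                      void (*phy_tx)(const struct flit *))
{
    uint8_t seq = 0;
    for (size_t off = 0; off < len; off += FLIT_PAYLOAD_BYTES) {
        struct flit f = { .proto = (uint8_t)proto, .seq = seq++, .crc = 0 };
        size_t chunk = (len - off < FLIT_PAYLOAD_BYTES) ? len - off
                                                        : FLIT_PAYLOAD_BYTES;
        memcpy(f.payload, pkt + off, chunk);
        /* CRC computation and retry handling omitted from this sketch. */
        phy_tx(&f);
    }
}
```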

The merits of disaggregating solutions into versatile, composable functions on chiplets depend heavily on the attributes of the D2D interface, which must achieve a good performance-power-cost trade-off. BoW is seen as a compelling solution, since it provides very high bandwidth, low latency and low power at a reduced cost. In addition, it has very low circuit complexity, which enables broader adoption across multiple customers and product lines. The initial interface configuration is targeted to deliver up to 128 GB/s of raw bandwidth with sub-8 ns latency and less than 0.5 pJ/bit of active power consumption.
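
A quick back-of-the-envelope check, using only the target figures quoted above, shows what those numbers imply for link power: at 128 GB/s and 0.5 pJ/bit, the active power works out to roughly half a watt.

```c
#include <stdio.h>

/* Back-of-the-envelope check using the target figures quoted above
 * (128 GB/s raw bandwidth, 0.5 pJ/bit). These are the article's stated
 * targets, not measured silicon data. */
int main(void)
{
    const double raw_bandwidth_GBps = 128.0;        /* GB/s, raw       */
    const double bits_per_second    = raw_bandwidth_GBps * 8e9;
    const double energy_per_bit_J   = 0.5e-12;      /* 0.5 pJ per bit  */

    /* Active power = bits moved per second x energy per bit */
    const double power_W = bits_per_second * energy_per_bit_J;

    printf("D2D link active power at full raw bandwidth: ~%.2f W\n", power_W);
    return 0;
}
```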

Additionally, a rich ecosystem of partners is forming around the standardized D2D chiplet interface. Several established vendors are working on a range of high-speed serial and processing frameworks that will support an extensive solution market. In addition to data centers, the developing partner ecosystem is focusing on other high-growth market segments such as 5G infrastructure, edge compute, automotive and end-client devices.

The RISC-V extensible ISA provides a solid base for bringing domain-specific acceleration in conjunction with a unified software framework. This is a key rationale for founding Ventana Micro Systems: we wanted to bring RISC-V into the high-performance CPU category with data-center-class processors that address the specific needs of hyperscalers and enterprise customers. We chose to pioneer a chiplet-based approach within an ecosystem of partners to enable rapid technology adoption.

We have demonstrated that our compute chiplets can process and execute custom instructions within an integrated chiplet design. This approach provides the flexibility to support a range of solutions in which customers can choose to keep their differentiating technology private on a separate chiplet or work directly with Ventana to achieve a more optimized integration.
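
As a minimal sketch of what that looks like from software (the opcode, funct fields and operation semantics here are illustrative assumptions, not Ventana's actual encoding), a custom instruction placed in the RISC-V custom-0 opcode space can be emitted with a standard GNU toolchain via the .insn directive:

```c
#include <stdint.h>

/* Illustrative only: a hypothetical accelerator operation exposed as a
 * RISC-V custom instruction in the custom-0 opcode space (0x0B). The
 * funct3/funct7 values and the operation's semantics are assumptions. */
static inline uint64_t accel_op(uint64_t a, uint64_t b)
{
    uint64_t result;
    /* GNU as R-type encoding: .insn r opcode, funct3, funct7, rd, rs1, rs2 */
    __asm__ volatile (".insn r 0x0B, 0x0, 0x0, %0, %1, %2"
                      : "=r"(result)
                      : "r"(a), "r"(b));
    return result;
}
```

Because such an encoding lives in the ISA's reserved custom opcode space, stock compilers and assemblers can emit it without modification, which is part of what RISC-V extensibility buys.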

Chiplet-based integration is a needed and well-suited approach for enabling new disruptive trends such as disaggregated servers, heterogeneous computing and domain-specific acceleration within the data center and other high-growth markets. On top of enabling rapid adoption of these emerging trends, it provides significant cost and time-to-market advantages over traditional monolithic SoCs.

Standardization of the D2D interfaces within ODSA will enable a rich ecosystem to support these unique, differentiated integrations from a set of available chiplets. RISC-V ISA extensibility provides the recipe to unleash domain-specific acceleration in record time, by leveraging a production-ready compute chiplet and supporting ecosystem.

This article was originally published on EE Times.

Balaji Baktha is founder and CEO of Ventana Micro Systems.
