The recently announced UCIe 1.0 specification provides a complete standardized die-to-die interconnect with physical layer, protocol stack, software model, and compliance testing that will enable end users to combine components from a multi-vendor ecosystem for system-on-chip (SoC) construction.
The recently announced Universal Chiplet Interconnect Express (UCIe) 1.0 specification covers the die-to-die I/O physical layer, die-to-die protocols, and a software stack model leveraging the PCI Express (PCIe) and Compute Express Link (CXL) industry standards.
It’s fair to say that UCIe is a long time coming. Chiplets aren’t new, but a recent uptick in interest in the technology has underscored the need for a formal standard and best practices.
Chiplet technology has garnered a lot of interest in recent years because of its tried-and-true nature and its ability to help semiconductor companies solve common problems faced today. Chiplets offer an approach to semiconductor design and integration that holds the promise of keeping pace with Moore’s Law, which is now nearly six decades old and whose pace of manufacturing advancement has been slowing of late.
Chiplets offer the potential to return to the two-year doubling cycle that has been the economic foundation of the semiconductor business since 1965. They replace a single silicon die with multiple smaller dies in a unified packaged solution, which provides more total silicon area for adding transistors.
“A lot of the companies are hitting against the critical limit in their design as the demand for processing continues to be insatiable,” said UCIe chair and Intel senior fellow Debendra Das Sharma. “So different companies are putting together their own chiplet connected through their own proprietary mechanism, effectively offering a scale-up solution.”
Aside from the benefit of shrinking dies and increasing yield at the same time, chiplets are appealing because they can be built using well-understood and proven components and techniques, which, together with advances in testing and packaging, reduces the likelihood of failure. Chiplets also enable companies to stitch together dies from other vendors, allowing each company to focus on its strengths when building a device.
Chiplets also offer the best performance for the price because it’s not always necessary to move the entire design to the next process node. A chiplet-based design could pair one die fabricated at 60 nanometers (nm) with another at 28 nm, allowing for both flexibility and reliability.
The added flexibility provided by chiplets, however, means companies are approaching chiplet design differently. Prior to the introduction of the UCIe 1.0 standard, the Open Compute Project (OCP) was in the process of pulling together best practices through the OCP Open Domain-Specific Architecture subproject to establish commonly used processes that go into putting chiplets together.
Computer hardware manufacturer zGlue is another example of a company that’s looking to bring clarity to the chiplet ecosystem. It offers a platform and process for building custom chips on demand to help hardware vendors respond to increasingly intense time-to-market pressures.
The goal of the UCIe 1.0 specification is similar: align the semiconductor industry around an open platform for chiplet-based solutions, creating an open chiplet ecosystem that supports heterogeneous integration and preserves the flexibility to mix and match chiplets from different process nodes, fabs, and vendors.
“Heterogeneous chiplet integration is needed to get a lot of the economies of scale,” Das Sharma said. “It reduces your time-to-market by reusing existing chiplets.”
The ratified UCIe 1.0 specification delivers this complete, standardized die-to-die interconnect (physical layer, protocol stack, software model, and compliance testing), enabling end users to combine components from a multi-vendor ecosystem for system-on-chip (SoC) construction. “This is going to be a game changer in the entire industry,” Das Sharma said. “This is how people are going to be building their SoCs.”
Das Sharma went on to explain that the goal of the UCIe consortium is to ensure the UCIe 1.0 standard offers compelling power, performance, and cost characteristics. “We want to be able to transfer a lot of bandwidth in a very power-efficient manner. You can build something that is going to deliver a lot of bandwidth with very low latency, in a cost-effective manner, with low power.”
Interoperability is also essential, with clarity on how things are going to work. “We want to make sure that we’re defining the full stack. If we want it to be plug and play, we want to leverage existing software, because we don’t want to go and reinvent the wheel.”
Among the vendors already participating in the group that’s managing the UCIe are AMD, Google, Meta, Microsoft, Samsung, and TSMC. Intel is playing a key role by “donating” the initial specification.
The CXL/PCIe standards were selected as the protocols because they are board-to-board interfaces and can address common use cases: PCIe/CXL.io handle I/O attach, CXL.mem handles memory use cases, and CXL.cache handles accelerator use cases. Similar to both PCIe and CXL, UCIe is focused on interoperability even as it evolves. Das Sharma said other protocols will be considered for future iterations, as will advanced chiplet form factors and chiplet management.
Intel sees UCIe as a critical component of its IDM 2.0 strategy, according to Kurt Lender, IO Technology Solution Team Strategist in the company’s datacenter and AI group. This is because the specification builds on Intel’s open Advanced Interface Bus (AIB) standard and supports the ability to use the right chiplet for the job, regardless of who makes it, Lender wrote in a recent blog post.
“It’s a new era of semiconductor architecture that puts designers in control and continues Moore’s vision of doubling computing power well into the foreseeable future.”
This article was originally published on EE Times.
Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.