Agile Approach to SoC Design Verification

Article By : Paul Cunningham, Cadence Design Systems

Think of it as a new class of EDA tool—one engine to rule all the other engines in a full verification flow.

As agile methods have become established in software development as a way to improve productivity and quality, interest is growing in applying them to hardware design.

Still, success in the hardware domain is generally perceived to have been limited. Reality is probably somewhat better than perception, since some agility trends in hardware are not explicitly labeled as such.

For example, we see increasing efforts to decouple IP-level design and verification from SoC-level design and verification. In that case, each IP team runs asynchronously from SoC projects that operate on a “train model,” picking up whatever version of each IP is ready at the time an SoC design leaves the station.

While not branded as agile, this approach does align with an agile philosophy.


Computing barrier

The high cost of taping out a chip design and the inability to change a design post-tape out are often cited as key reasons why agile methods do not map well to hardware design. But the inability to be agile after tape out does not necessarily imply we cannot be more agile before tape out.

One of the biggest headwinds to adopting agility in hardware design is the computational complexity of hardware verification. Testing a software program requires only the computing needed to execute that program, and of course the test runs at full speed.

Testing a hardware design requires a simulator program that mimics in software what the design will do when it is manufactured in hardware. This simulator program is very computationally expensive, and its speed of execution is thousands of times slower than the speed of the real silicon it is simulating.

Companies designing hardware are compute-limited when it comes to testing their designs. Special simulation accelerators are available from several companies supporting systems design, based either on proprietary processors designed specifically for simulation acceleration, or on FPGAs. While these systems can simulate hundreds of times faster than simulation on general-purpose servers, their cost is proportionally higher. Hence, design teams find themselves similarly limited in compute resources on these platforms.

Agile design requires continuous integration and testing, not just at the unit level, but also at the entire system level. If testing is computation-limited, agile design requires greater computing efficiency, especially at the system level. For example, a typical, modern SoC requires up to five days of continuous compute time on a server farm across thousands of machines to assemble and run even a basic set of full chip tests.

Against such an extreme compute backdrop, how can a design team strive to become more agile?

Parameterization, computational logistics

Two key opportunities are available to push the computing barrier in agile hardware design: reducing design size through parameterization and reducing test size through computational logistics.

First, parameterization. Replication is increasingly common in SoC design, be it IP-level replication like multicore CPUs, or architecture-level replication such as shader cores in a GPU or MAC nodes in an AI accelerator. Parameterization significantly extends the scope of replication by allowing similar but non-identical blocks to be fused together into a single parameterized unit.

The more replication in a design, the greater the possibility to automatically generate cut-down configurations of a design that are smaller but still meaningful for test. The more sophisticated the use of parameterization, the more flexibility there is to minimize the size of a design used to test a particular piece of functionality at the SoC level.

Replication and parameterization are already well supported in mainstream hardware description languages (HDLs) such as SystemVerilog, but they can be further enabled by adopting higher-level languages, such as SystemC, MATLAB, Python, or Chisel, as HDL generators. As with the trend to decouple IP- and SoC-level design, a similar trend is emerging for adoption of higher-level languages for hardware design.
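To make the HDL-generator idea concrete, here is a minimal sketch of driving replication from Python: a single template emits a Verilog wrapper for any core count, so the same source can produce both the full design and a cut-down configuration for faster SoC-level testing. The module and signal names (`cpu_cluster`, `cpu_core`, and so on) are hypothetical illustrations, not part of any real flow.

```python
# Minimal sketch: Python as an HDL generator. One parameterized
# template produces many configurations of the same design.
# All module/signal names are hypothetical.

def gen_cluster(num_cores: int, module_name: str = "cpu_cluster") -> str:
    """Emit a Verilog wrapper instantiating `num_cores` identical cores."""
    insts = "\n".join(
        f"  cpu_core u_core{i} (.clk(clk), .rst_n(rst_n), "
        f".irq(irq[{i}]), .done(done[{i}]));"
        for i in range(num_cores)
    )
    return (
        f"module {module_name} (\n"
        f"  input  wire clk,\n"
        f"  input  wire rst_n,\n"
        f"  input  wire [{num_cores - 1}:0] irq,\n"
        f"  output wire [{num_cores - 1}:0] done\n"
        f");\n{insts}\nendmodule\n"
    )

# Full configuration for tape-out; a two-core cut-down configuration
# that is still meaningful for SoC-level test but far cheaper to simulate.
full_rtl = gen_cluster(16)
tiny_rtl = gen_cluster(2)
```

The same generator parameter that scales the design up for production scales it down for verification, which is exactly the flexibility the article describes.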

As for computational logistics, if we are integrating and testing continuously under an agile design methodology, then each integration and test is incremental to the previous integration and test. For a given incremental design change, computational logistics means automatically determining the best design configuration, set of tests and test configurations for delivering good verification quality with minimal cost of compute.
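One way to picture computational logistics is as an incremental test-selection problem: given which design blocks a change touched, pick the cheapest subset of tests that still covers every affected block. The sketch below uses a greedy weighted set cover; the coverage map and cost figures are invented assumptions standing in for data a real flow would mine from prior runs, and none of the names reflect an actual tool's API.

```python
# Hypothetical sketch of computational logistics as incremental test
# selection. Given the blocks touched by a design change, choose the
# cheapest set of tests that covers all of them (greedy set cover).

def select_tests(changed_blocks, coverage, cost):
    """Greedy weighted set cover over the affected blocks.

    coverage: test name -> set of design blocks it exercises
    cost:     test name -> compute cost (e.g. CPU-hours)
    Returns (chosen tests, blocks left uncovered by any test).
    """
    uncovered = set(changed_blocks)
    chosen = []
    while uncovered:
        # Pick the test with the best cost per newly covered block.
        best = min(
            (t for t in coverage if coverage[t] & uncovered),
            key=lambda t: cost[t] / len(coverage[t] & uncovered),
            default=None,
        )
        if best is None:  # some block has no covering test: report it
            break
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, uncovered

# Invented coverage/cost data for illustration.
coverage = {
    "smoke_boot":  {"cpu", "bus"},
    "gpu_regress": {"gpu"},
    "full_chip":   {"cpu", "bus", "gpu", "ddr"},
}
cost = {"smoke_boot": 1.0, "gpu_regress": 2.0, "full_chip": 50.0}

tests, missed = select_tests({"cpu", "gpu"}, coverage, cost)
# Two cheap targeted tests are chosen instead of the 50-hour full-chip run.
```

In this toy case the selector covers a CPU-plus-GPU change with the 1-hour boot smoke test and the 2-hour GPU regression rather than the 50-hour full-chip suite, which is the cost-versus-quality trade the article has in mind.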

Think of it as a new class of EDA tool—one engine to rule all the other engines in a full verification flow.

We see significant potential to improve verification computing efficiency through computational logistics, especially looking forward to a heterogeneous, cloud-based future where usage-billed capacity is available on demand across a wide range of simulation and emulation platforms. Just as computational logistics has transformed package throughput for shipping companies like UPS and FedEx, so too can it transform verification throughput in hardware design.

The takeaway

Hardware design is already becoming more agile, but there is much room for improvement. A key barrier to such improvement is the massive computing cost of hardware verification compared to software verification.

By leveraging replication, parameterization and higher-level languages as HDL generators we can minimize design sizes under test. By embracing computational logistics, we can minimize test workloads and further optimize design sizes under test, especially in a future that is cloud-enabled with the availability of unlimited usage-based verification computing.

This article was originally published on EE Times.

Paul Cunningham is senior vice president and general manager of the System and Verification Group at Cadence Design Systems.
