Apply formal methods to power-aware verification
The design team must verify not only that the inserted power management circuitry functions correctly, but also that it does not corrupt the chip's functionality. Ideally, the team would have power estimates early enough in the design flow to deploy and verify the appropriate reduction techniques, minimising or even avoiding late-stage redesign. In practice, though, truly accurate power estimates become available only at physical layout, where design changes, even small ones, ripple back through the design flow. Consequently, power-aware design often requires iterative optimisation up and down the flow. Moreover, because many optimisations are performed late in the flow, verification effort and risk increase, and debug becomes even more tedious and time-consuming. Power-aware design can therefore appreciably increase design and verification time and cost. The challenge is to achieve the target power consumption while limiting the cost of doing so.
Our ultimate objective is to verify not only the chip's functionality, but also that we have completely and correctly implemented the power intent described in Unified Power Format (UPF) or Common Power Format (CPF) descriptions. This article concludes that an "apps" approach is the best way to apply formal methods to power-aware verification.
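To make the notion of power intent concrete, the sketch below shows what a fragment of a UPF description might look like: a switchable power domain with isolation on its outputs. All instance, net and signal names are hypothetical, and this is a simplified illustration of the command style, not a complete or tool-ready intent file.

```tcl
# Hypothetical names throughout; a minimal UPF-style sketch, not a complete intent file.
create_power_domain PD_TOP
create_power_domain PD_VIDEO -elements {u_video}

create_supply_net VDD     -domain PD_TOP
create_supply_net VSS     -domain PD_TOP
create_supply_net VDD_VID -domain PD_VIDEO
create_supply_net VSS     -domain PD_VIDEO -reuse

set_domain_supply_net PD_TOP   -primary_power_net VDD     -primary_ground_net VSS
set_domain_supply_net PD_VIDEO -primary_power_net VDD_VID -primary_ground_net VSS

# Power switch so PD_VIDEO can be shut off under control of pwr_en
create_power_switch sw_video -domain PD_VIDEO \
    -input_supply_port  {in  VDD} \
    -output_supply_port {out VDD_VID} \
    -control_port       {ctrl pwr_en} \
    -on_state           {on_st in {ctrl}}

# Clamp PD_VIDEO outputs to 0 while the domain is powered down
set_isolation iso_video -domain PD_VIDEO \
    -isolation_power_net VDD -clamp_value 0 -applies_to outputs
set_isolation_control iso_video -domain PD_VIDEO \
    -isolation_signal iso_en -isolation_sense high -location parent
```

Verifying the power intent then means checking, for every such domain, that the switch, isolation and any retention behaviour interact correctly with the functional logic.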
We first address the challenges in devising and verifying the power management scheme and the power optimisations necessary to achieve this objective. To begin: how is the power management scheme devised?
Power management scheme
Initially, the system architects devise an implementation-independent functional specification to meet the product requirements, and then partition the functionality across hardware and software. They may also devise an initial power management scheme at this architectural level. However, the architectural scheme defines only which functionality may or should be deactivated for any given use case, not how it should be deactivated.
Consequently, how it should be deactivated is decided at the RTL implementation stage by the RTL team, that is, by a different group of engineers. The team decides whether and how to use a multiplicity of power reduction methods such as power domains and clock gating to achieve the requisite low power characteristics. These decisions must comprehend both hardware- and software-controlled circuit activity. They then implement the functional hardware, making extensive use and reuse of pre-designed silicon intellectual property (IP) blocks, together with new RTL blocks, some of which are under the control of software stacks.
However, although the RTL team's decisions may be informed by a physical floorplan, they do not (and cannot) comprehend the final partitioning at place and route (P&R), where accurate power estimates are made. A common outcome is that the power management scheme must be modified and re-verified after P&R.
Clearly, the power management scheme is a moving target, and requires design optimisation, verification and re-verification at every stage of the design flow—architecture, RTL implementation and physical design. However, implementing any scheme is often subject to significant constraints, such as the impact on IP use and re-use and on the use of design-for-test (DFT) circuitry.
The impact on IP use and reuse
In an ideal power reduction world, there would be a dedicated IP block for any particular chip function. The power management scheme could then be implemented on an IP block basis. For instance, "switch off video streaming" is implemented simply by switching off the associated video processing and control blocks.
However, in the real world, a given IP block may implement several functions, so switching off the block would disable more than the one function that should be disabled. Therefore, the team must devise a means of switching off only part of the IP block, for example by adding interfaces to the power-control registers or signals. This can be problematic in the case of third-party IP, where the team may have only black box information about its behaviour. In any case, the verification challenge now includes re-verifying the redesigned IP block(s) as well as verifying the power management circuitry.
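Where the floorplan and the IP's internal hierarchy permit, such partial shutdown can be expressed in UPF by scoping a power domain to the relevant sub-instance. The fragment below is a hedged sketch of this idea; the instance and signal names are hypothetical, and it assumes the function to be disabled maps onto a distinct sub-instance of the IP.

```tcl
# Hypothetical names; assumes the IP's video core is a distinct sub-instance
# that can be placed in its own switchable domain.
create_power_domain PD_IP_VIDEO -elements {u_third_party_ip/u_video_core}

# Isolate only the sub-block's outputs so the rest of the IP keeps running
set_isolation iso_ip_video -domain PD_IP_VIDEO \
    -isolation_power_net VDD -clamp_value 0 -applies_to outputs
set_isolation_control iso_ip_video -domain PD_IP_VIDEO \
    -isolation_signal ip_vid_iso_en -isolation_sense high -location parent
```

With black-box third-party IP, of course, the team may not know whether such a sub-instance boundary exists, which is precisely why visibility into the IP's behaviour becomes a verification requirement.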
Effects of DFT circuitry
Design-for-test (DFT) circuitry presents an additional complication. Conventional DFT assumes that the whole chip operates with all functions up and running, in order to minimise test pattern count and test time. That is how it operates not only on the tester, but also in field diagnostics. With power-aware design, DFT circuitry must now mesh with the design's power management scheme to avoid excessive power consumption and unnecessary yield loss at final test.
Power-aware design requirements
The five principal design requirements for implementing and verifying a low power scheme (figure 1) are:
1. Sufficiently accurate power estimations using representative waveforms, both pre- and post-route.
2. Accurate analysis and visibility of the white box behaviour of third-party IP prior to its modification and reuse.
3. The deployment and ongoing optimisation of appropriate power reduction techniques, both pre- and post-integration.
4. Exhaustive functional verification at the architectural and RTL levels, both before and after the deployment of power optimisation circuitry.
5. Verification of hardware functionality compliance with software control sequences.
Figure 1: Power-aware design requirements.
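Requirements 4 and 5 hinge on knowing which combinations of supply states are legal. In UPF this is captured in a power state table (PST), which both functional verification and software-sequence checking can be measured against. The sketch below uses hypothetical supply and state names to illustrate the form such a table might take.

```tcl
# Hypothetical supplies and states; a sketch of a UPF power state table (PST).
# VDD_sw_out is the output of a power switch feeding a switchable domain.
add_port_state VDD        -state {ON 0.9}
add_port_state VDD_sw_out -state {VID_ON 0.9} -state {VID_OFF off}

create_pst chip_pst -supplies {VDD VDD_sw_out}
add_pst_state RUN   -pst chip_pst -state {ON VID_ON}
add_pst_state SLEEP -pst chip_pst -state {ON VID_OFF}
```

A software control sequence is then compliant only if every state it drives the hardware through corresponds to a row of the PST; any unlisted combination is a power-intent violation.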
The first two requirements can be addressed using commercially available tools that use simulation, formal methods and behavioural indexing. But how do we tackle the remaining three requirements?
Ongoing optimisation and verification
As previously indicated, the power management scheme starts at the architectural level, so any available architectural features such as communication protocols must first be verified.