Security co-processor ties to PCI Express
Security is becoming increasingly valuable to networking, generating much recent interest in the internal architecture and operation of specialized security acceleration hardware. Unfortunately, there has been little discussion of the interconnect from security hardware to other devices, primarily because this interconnect has been based on one or two open standards that are broadly supported and not subject to differentiation.
However, the high level of churn surrounding interconnect technologies in the semiconductor industry makes a discussion of how the security accelerator interconnect will evolve over the next few years quite timely. Specifically, we will discuss how and why the security accelerator interconnect will evolve from today's dominant PCI and PCI-X to tomorrow's PCI Express technology.
There are two scenarios in which security accelerators are typically used: in-line and co-processor. The in-line scenario places the accelerator in the fast data path, in line with the network processor. Logically, its data path interfaces should match those used by the network processor and other silicon such as framers and traffic managers. These interfaces are defined by two existing, well-supported standards bodies: the Optical Internetworking Forum and the Network Processor Forum. The evolution of these interfaces is well-established, so it is assumed for now that in-line security accelerators will continue down this path. Note that this scenario still requires an interface to the host processor for control and management functions.
The co-processor scenario attaches the security accelerator to a general-purpose or host CPU. Security co-processors have typically used the same PCI interconnect as other types of co-processors. With high-performance host processors, this configuration can handle gigabit data rates today.
In this article, we will take a look at the co-processor scenario because it represents the largest market and, therefore, plays a dominant role in determining the security accelerator interconnect.
There are two approaches to discussing the evolution of this security accelerator interface.
The first approach is the simpler one. It is pragmatic and constraint-based. It simply recognizes that in the co-processor scenario, the ubiquitous interface used today by security accelerators is PCI (for simplicity, PCI also includes PCI-X). It leverages the following advantages of PCI: adequate performance and features; ubiquitous support in hardware and software; broad availability of expertise and tools for development and manufacturing; low cost; stability; and a broad range of complementary specifications.
Those advantages have been adequate to support the security co-processor model and have given designers tremendous flexibility in implementing many different kinds of security equipment.
Additional capabilities will be needed, however, as network bandwidth grows, the need for security becomes more pervasive and the demand for service availability increases. These capabilities include higher bandwidth, lower cost and more robust reliability, availability, serviceability and manageability (RASM) features.
The migration path from PCI is clear: The PCI SIG has elected PCI Express technology as the successor to the PCI and PCI-X standards. The communications and compute industries are adopting this new standard because it was designed from the outset to match their requirements.
A variety of factors will determine the timing of the migration. These include the need for advanced features, vendor product development schedules and the need for vendors to recoup their costs. Additional factors include end-users' life cycle expectations, system prequalification costs and the deployment of processors and chipsets with PCI Express support. This timing can be different for each vendor and customer.
Vendors are slowly migrating to PCI Express products. Initial PCI Express deployments are supported by PCI-to-PCI Express bridge chips. As a result, security co-processors will not be forced to make an artificially early migration. Instead, the migration can be driven by cost and features that are of value to the end user. As more and more processors and chipsets implement PCI Express natively, security and other co-processors will do likewise.
As for timing, we can learn from the migration of ISA to PCI. Applications determined the rate at which this migration took place. For applications that needed the extra performance or features of PCI, the migration happened relatively quickly. Applications that valued the stability of the hardware platform above all else are still not quite finished migrating away from ISA. We can expect to see a similar pattern in the migration from PCI to PCI Express.
Modularity sparks acceptance
The factor accelerating this migration is modular platforms. The modularity of computer platforms drove the ubiquity of PCI in compute platforms and, therefore, its adoption as the security accelerator interface of choice. This modularity required a foundation of standard interconnects, and PCI was the interconnect of choice for I/O devices, peripherals and co-processors.
Since PCI Express is based upon a similar foundation of modular platforms and a standards-based interconnect, communications equipment is moving toward that technology in the same manner.
This brings us to the second approach, which is driven by the fundamental requirements of security-equipped platforms. Security accelerator requirements can be placed in four simple categories.
Simple migration--Since most accelerators use PCI today, it is important that the system migration, and in particular the software drivers, involve a minimum of effort.
Scalable bandwidth--As the performance of host processors scales in accordance with Moore's Law, the supported throughput will scale well beyond single-gigabit bandwidths. Overprovisioning of bandwidth also raises interesting possibilities, such as using the host processor's memory to store session state information. This could reduce system cost, assuming that the added latency can be absorbed and the requirements of FIPS 140 Level 3 can still be met.
RASM features--In high-availability systems, an interconnect that is inherently reliable or even fault-tolerant would be of value. It may also be useful to provide a separate class of service for control and management functions, such that a hardware fault cannot prevent diagnostic software from being run.
Cost--The improvement of silicon and system cost is always an advantage in improving margins and responding to competitive pricing.
PCI Express was designed to address all of these issues. One hundred percent backward compatibility of PCI Express with PCI simplifies migration. No changes are required to software drivers--the effort to migrate to PCI Express is no greater than the effort of migrating to a new chipset.
The narrowest PCI Express link (x1) provides 2Gbps of bandwidth in each direction simultaneously. Links can be scaled in small increments: x2, x4, x8, x12, x16 and x32. Early chipsets will support x4 and x8 links. The anticipated second-generation physical layer is expected to at least double the bandwidth per lane, based on a conservative assessment of current physical-layer developments.
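The scaling above follows from simple arithmetic. As a rough sketch, assuming first-generation signaling of 2.5 GT/s per lane with 8b/10b line encoding (so 8 of every 10 transmitted bits carry payload), the usable per-direction bandwidth for each defined link width works out as follows:

```python
# Sketch of first-generation PCI Express per-link bandwidth arithmetic.
# Assumes 2.5 GT/s signaling and 8b/10b encoding; ignores packet
# (header/CRC) overhead, so real throughput is somewhat lower.
SIGNALING_RATE_GTPS = 2.5      # raw symbol rate per lane, GT/s
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b: 8 payload bits per 10 line bits

def link_bandwidth_gbps(lanes: int) -> float:
    """Usable bandwidth in Gbps, per direction, for an xN link."""
    return lanes * SIGNALING_RATE_GTPS * ENCODING_EFFICIENCY

for width in (1, 2, 4, 8, 12, 16, 32):
    print(f"x{width:<2} link: {link_bandwidth_gbps(width):5.1f} Gbps each direction")
```

This is where the 2Gbps figure for an x1 link comes from; an x32 link scales the same arithmetic to 64Gbps in each direction.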
PCI Express provides greatly expanded RASM features. These include a fundamentally reliable link layer, graceful degradation in the presence of lane failures, and fault logging and management, among others.
Finally, PCI Express enables significant cost improvements over PCI. All other things being equal, PCI Express' serial architecture provides much lower inherent costs than PCI's parallel-bus construction in terms of pin count and die area to achieve the same peak bandwidth. This includes the power and ground pins necessary to make the port work properly.
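The pin-count advantage of a serial link can be illustrated with back-of-the-envelope figures. The signal-pin counts below are rough assumptions for illustration only, not values taken from the specifications, but the per-pin efficiency gap they show is the point the article makes:

```python
# Illustrative comparison of signal pins versus peak bandwidth for a
# parallel bus and a serial link. Pin counts and bandwidths here are
# approximate assumptions, not specification values.
interfaces = {
    # name: (approx. signal pins, peak bandwidth in Gbps)
    "PCI-X 64-bit/133MHz": (90, 8.5),   # parallel: 64 data + address/control
    "PCI Express x4":      (16, 8.0),   # serial: 4 lanes x 2 pairs x 2 pins
}

for name, (pins, gbps) in interfaces.items():
    print(f"{name:20s} ~{pins:3d} signal pins, {gbps:4.1f} Gbps peak, "
          f"{gbps / pins:.2f} Gbps per pin")
```

Under these assumed figures, the serial link delivers comparable peak bandwidth on a fraction of the pins, and fewer signal pins also mean fewer of the accompanying power and ground pins the article mentions.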
- John Beaton
Interconnect Program Manager, Embedded Intel Architecture Division