The increasingly widespread use of FPGAs is exposing the devices to possible attacks, bringing to the fore the issue of security—both the protection of intellectual property and the protection of sensitive data.
The large-scale introduction of programmable digital devices has enabled the implementation of embedded systems with an ever-increasing number of advanced features. Field-programmable gate arrays have achieved performance that was once attainable only through dedicated hardware designs. Compared with a custom hardware design, moreover, an FPGA offers benefits such as high flexibility and high configurability, meeting the requirements of a wide range of applications. The trend toward the progressive adoption of highly integrated programmable devices is confirmed by the market availability of SoCs that integrate one or more processor cores with an FPGA module; an example is the Xilinx Zynq (Figure 1).
This widespread adoption, however, exposes FPGAs to possible attacks, bringing to the fore the issue of security: both the protection of intellectual property and the protection of sensitive data. A malicious actor who acquired an FPGA's contents would come into possession of proprietary information and algorithms, with the risk of consequent financial and reputational damage for the company. Furthermore, the FPGA configuration could be altered for illegal or fraudulent purposes.
Below, we analyze the main vulnerabilities to which an FPGA can be exposed and present techniques adopted by programmable device manufacturers as anti-tampering measures.
Types of vulnerabilities
There are several ways in which the security and data integrity of an FPGA can be compromised, including:
• Reverse engineering. This technique was first applied to early ASICs and ICs to trace the internal layout and interconnections by scanning, layer by layer, the structures exposed through destructive analysis of the device. In the case of FPGAs, however, the analysis is non-destructive and aims to intercept and decode the configuration bitstream, which normally resides in a flash memory and is transferred to the FPGA's SRAM during boot. Even if the configuration bitstream is not transferred in clear text, it is potentially exposed to attacks carried out from outside using automated analysis tools. Because the bitstream contains all the information necessary to reconstruct the FPGA configuration, its content must be protected by every available means.
• Side-channel attack (SCA). This involves monitoring and acquiring combinations of input/output signals, measuring the device temperature to highlight the hottest chip areas (those crossed by current flows), and recording the power consumption and electromagnetic emissions during operation, up to the inspection of individual memory cells or transistors. The information thus obtained can be used to compromise the security of the device or to mount other types of attacks. SCAs rely on statistical models, such as differential analysis and correlation analysis, applied to the physical quantities monitored during FPGA operation. The current consumption, for example, shows peaks during the iterations of a cryptographic algorithm, and those peaks depend on both the data processed and the instructions executed. Appropriate mathematical models can then correlate the variations in consumption with the hardware operations performed, revealing the value of the decryption key.
• Thermal laser stimulation (TLS). Commonly used for fault analysis, this technique can also be used to locate and read the contents of the memory of a chip, with the aim of stealing secret information such as the key for bitstream decoding. It has not yet been proven that this attack technique is applicable to modern integrated circuits equipped with countermeasures against SCAs. TLS attacks require very long execution times (on the order of a few hours) and expensive equipment (a professional microscope for failure analysis, necessary for this type of attack, can cost up to US$1 million). But because the attack
can be conducted even when the component is not powered, manufacturers of programmable logic devices cannot afford to ignore this category of attacks.
• Bitstream alteration. By modifying the bitstream contents during the transfer from the external PROM to the on-chip SRAM, it is possible to affect the FPGA behavior.
• Readback attack. Many FPGAs allow the configuration bitstream to be read back through the JTAG programming and debugging interface. Although devices include built-in mechanisms to disable readback, attackers may still find ways to bypass these protections.
Over time, FPGA manufacturers have introduced countermeasures aimed at preventing or blocking these attacks while preserving the security of the programmable device.
The first commonly adopted technique is bitstream encryption, with decryption performed at the hardware level by dedicated functional blocks integrated in the chip. One of the most widely used encryption standards is the Advanced Encryption Standard. Based on the Rijndael algorithm, AES uses symmetric keys (the same key serves for both encryption and decryption) of 128, 192, or 256 bits. During the programming phase, the key is generated and stored in the FPGA inside a battery-backed RAM (BBRAM), a random-access memory whose content is preserved by a backup battery with a lifetime of a few decades. During the boot phase, an integrated AES engine receives the bitstream, decrypts it, and passes the configuration data to the configuration logic. Because the configuration data never appears in clear text on any of the I/O ports, the bitstream cannot be intercepted from the outside.
An alternative to bitstream encryption is to store the FPGA configuration inside the chip itself, loading it at boot using only internal buses. Figure 2 shows the programming block diagram of a Lattice XP FPGA. The configuration bitstream can be loaded into the SRAM from the internal flash; this operation is not only more secure but also faster, taking a few microseconds. It is also possible to load a bitstream from the outside via the JTAG (IEEE 1532) interface, which is also used to program the flash memory, though this operation takes much longer to complete (a few seconds).
On some FPGA models, the decoding key can also be stored in a one-time programmable memory during the programming phase, thereby increasing the degree of protection from external attacks.
The key-obfuscation technique adopted by Xilinx on the UltraScale series provides another layer of security. As shown in Figure 3, the method uses the AES-GCM algorithm to obtain an obfuscated key by combining a family key (known only to Xilinx) with a key chosen by the user. The resulting obfuscated key is stored in the FPGA, while the bitstream is encrypted with AES-GCM under the user key and stored in the external flash. For bitstream decryption during boot, the reverse procedure applies: the obfuscated key and the family key are combined to recover the required user key.
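The combine-and-recover flow can be illustrated with a deliberately simplified model. In the sketch below, a XOR stands in for AES-GCM purely to show that the value stored on the device is useless without the family key and that the user key is recoverable at boot; it is not the actual cryptography used by the hardware:

```python
def combine(key_a: bytes, key_b: bytes) -> bytes:
    # XOR stand-in for the AES-GCM combine step (illustrative only).
    return bytes(a ^ b for a, b in zip(key_a, key_b))

family_key = bytes(range(32))       # known only to the vendor
user_key = bytes([0xA5] * 32)       # chosen by the designer
obfuscated = combine(user_key, family_key)  # value stored in the FPGA

# During boot, the reverse procedure recovers the user key:
assert combine(obfuscated, family_key) == user_key
assert obfuscated != user_key       # stored value reveals neither key
```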
To counter attacks aimed at injecting incorrect bitstreams, many FPGAs implement mechanisms that refuse to load bitstreams encoded with incorrect keys and erase the configuration if its checksum is incorrect.
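A minimal sketch of the checksum check, using CRC-32 as a placeholder for the vendor-specific integrity mechanism (the framing shown here is hypothetical, and real devices pair such a check with authenticated encryption):

```python
import zlib

def make_image(bitstream: bytes) -> bytes:
    # Append a CRC-32 footer to the bitstream.
    return bitstream + zlib.crc32(bitstream).to_bytes(4, "big")

def load_image(image: bytes):
    # Verify the footer before configuring; on a mismatch the device
    # would refuse the bitstream and clear the configuration SRAM.
    body, crc = image[:-4], int.from_bytes(image[-4:], "big")
    return body if zlib.crc32(body) == crc else None

image = make_image(b"\x00\xff\x55\xaa" * 8)
tampered = bytes([image[0] ^ 0x01]) + image[1:]  # single-bit fault
```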
To increase the protection from SCAs, many FPGAs monitor and count any occurrence of bad bitstream decoding. Once a certain programmable threshold is reached, the memorized key is erased, and the device must be reprogrammed.
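The threshold mechanism can be sketched as a simple lockout counter (the class and method names below are hypothetical, not a vendor API):

```python
class KeyStore:
    """Model of a key store that erases itself after repeated
    bitstream-decryption failures."""

    def __init__(self, key: bytes, threshold: int = 3):
        self.key = key
        self.threshold = threshold
        self.failures = 0

    def report_decryption(self, success: bool) -> None:
        if success:
            self.failures = 0   # a good boot resets the counter
            return
        self.failures += 1
        if self.failures >= self.threshold:
            # Erase the key: the device must be reprogrammed to boot.
            self.key = None

store = KeyStore(key=b"\x11" * 32, threshold=3)
for _ in range(3):
    store.report_decryption(False)
```

After the third consecutive failure the key is gone, so an attacker cannot simply keep feeding the device guessed bitstreams.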
While FPGAs may appear highly vulnerable to attacks aimed at capturing the key and decoding the bitstream, the manufacturers of these components have taken sufficient steps to allow users to implement effective countermeasures. The different types of FPGAs available on the market today offer the levels of security required by virtually any type of application.
This article was originally published on EE Times.