Circuit design optimises deep learning apps

By EE Times Asia

The integer-based technology reduces the data bit width of the compute units and of the memory that records deep learning training results.

Fujitsu Laboratories has developed a circuit technology that improves the energy efficiency of hardware used for deep learning without changing network structures or training algorithms.

Deep learning requires massive calculations on training data, but achievable processing performance is capped by the electricity available to the servers and other hardware that carry out the training. Increasing performance per watt has therefore become a key issue in accelerating deep learning.

Now, Fujitsu Laboratories has developed an algorithm-driven circuit technology built around a unique numerical representation that reduces the bit width of the data used in computations. Exploiting the characteristics of deep learning's training computations, it automatically adjusts the position of the decimal point according to statistical information about the distribution of values, preserving computational accuracy sufficient for deep learning.

Figure 1: Improving calculation accuracy in the computational core (Source: Fujitsu)
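Fujitsu has not published its circuit details, but the general idea of choosing a fixed-point format from value statistics can be sketched briefly. The Python below is an illustrative assumption rather than Fujitsu's method: it picks the number of fraction bits from a high percentile of the observed value distribution, so that a narrow signed integer format covers the values that matter.

import numpy as np

def choose_fraction_bits(values, total_bits=8, coverage=0.999):
    # Illustrative heuristic (assumption): size the integer part from a
    # high percentile of |values| so rare outliers do not waste range.
    max_abs = max(float(np.quantile(np.abs(values), coverage)), 1e-12)
    int_bits = int(np.ceil(np.log2(max_abs))) + 1  # +1 for the sign bit
    return total_bits - int_bits

def quantize(values, total_bits=8):
    # Map floats to signed integers with a data-dependent binary point.
    frac_bits = choose_fraction_bits(values, total_bits)
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(values * scale), lo, hi).astype(np.int32)
    return q, frac_bits

def dequantize(q, frac_bits):
    return q.astype(np.float32) / (2.0 ** frac_bits)

# Example: values distributed like the weights of a trained layer.
w = np.random.normal(0.0, 0.05, size=10_000).astype(np.float32)
q, frac_bits = quantize(w, total_bits=8)
print(frac_bits, np.abs(dequantize(q, frac_bits) - w).mean())

Using a percentile instead of the absolute maximum keeps rare outliers from forcing a coarse scale; during training, such statistics can be gathered continuously so the decimal point position tracks the shifting distributions.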

As a result, both the bit width of the compute units and the bit width of the memory that records training results can be reduced during learning, enhancing energy efficiency, Fujitsu said.

In a simulation of deep learning hardware incorporating this technology, Fujitsu Laboratories confirmed a significant improvement in energy efficiency, roughly four times that of a 32-bit compute unit, in a deep learning example using LeNet.

The newly developed circuit technology improves energy efficiency in two ways. First, operations previously executed in floating point can be executed as integer calculations instead, which consume less power. Second, reducing the data bit width from 32 bits to 16 bits halves the volume of data handled, cutting the power consumption of the compute units and memory by about 50%; reducing it further to 8 bits cuts that power consumption by about 75%.
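As a rough illustration of the first mechanism, the sketch below replaces a float32 dot product with integer multiply-accumulates. It is a generic int8 quantisation scheme, not Fujitsu's circuit; the per-tensor scale factors are hypothetical.

import numpy as np

def int8_dot(a_f, b_f):
    # Hypothetical per-tensor scales mapping each input onto [-127, 127].
    sa = 127.0 / float(np.max(np.abs(a_f)))
    sb = 127.0 / float(np.max(np.abs(b_f)))
    a_q = np.round(a_f * sa).astype(np.int8)
    b_q = np.round(b_f * sb).astype(np.int8)
    # Integer multiply-accumulate; widen to int32 so the sum cannot overflow.
    acc = int(np.dot(a_q.astype(np.int32), b_q.astype(np.int32)))
    return acc / (sa * sb)

a = np.random.randn(256).astype(np.float32)
b = np.random.randn(256).astype(np.float32)
print(float(np.dot(a, b)), int8_dot(a, b))

The int8 operands are a quarter the size of float32 values, which matches the roughly 75% reduction in data volume behind the article's figure; actual power savings depend on the hardware, which is why they are stated as approximations.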

Figure 2: Optimising calculation settings using statistical information (Source: Fujitsu)

This technology makes it possible to expand the range of applications for advanced AI using deep learning processing to a variety of locations, from cloud servers to edge servers, Fujitsu said.

Fujitsu Laboratories plans to commercialise the circuit technology as part of Human Centric AI Zinrai, Fujitsu Limited's AI technology. It will also continue to develop circuit technology in order to further reduce the data volumes used in deep learning.
