The technology reduces the volume of internal GPU memory used by over 40%.
Fujitsu has introduced a new technology that it says will streamline the use of a GPU's internal memory, allowing larger neural networks to be trained and thereby improving machine learning accuracy.
Recent years have seen a focus on technologies that use GPUs for high-speed machine learning to support the huge volume of calculations necessary for deep learning processing. But to make use of a GPU's high-speed calculation ability, the data used in a series of calculations needs to be stored in the GPU's internal memory. This in turn limits the scale of the neural network that can be built to what fits within that memory capacity.
Fujitsu Laboratories has been working on a technology to improve memory efficiency, implementing and evaluating it in the Caffe open source deep learning framework software. When implemented, the technology analyses the structure of the neural network, and optimises the order of calculations and allocation of data to memory, so that memory space can be efficiently reused.
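The general idea of reusing memory by analysing the order of calculations can be illustrated with a simple liveness analysis: once a layer's output has been consumed by every operation that needs it, its buffer can be freed and the space reused. The sketch below is an illustration of this general principle only, not Fujitsu's actual algorithm; the graph format, layer names, and sizes are invented for the example.

```python
# Minimal sketch of liveness-based memory reuse for a neural-network
# computation graph. Illustrative only -- NOT Fujitsu's implementation;
# the (output, inputs, size) format and the example sizes are invented.

def plan_memory(ops):
    """ops: list of (output_name, input_names, size_mb) in execution order.
    Returns peak memory (MB) when each tensor's buffer is freed as soon
    as its last use has passed."""
    # Record the last step at which each tensor is still needed.
    last_use = {}
    for step, (out, ins, _size) in enumerate(ops):
        last_use[out] = step          # produced here (at least)
        for name in ins:
            last_use[name] = step     # consumed here
    live, peak = {}, 0
    for step, (out, ins, size) in enumerate(ops):
        live[out] = size
        peak = max(peak, sum(live.values()))
        # Free every buffer whose last use is now behind us.
        for name in [n for n in live if last_use[n] <= step]:
            del live[name]
    return peak

# Hypothetical three-layer chain: each activation occupies 100 MB.
ops = [("conv1", ["input"], 100),
       ("conv2", ["conv1"], 100),
       ("conv3", ["conv2"], 100)]

naive = sum(size for _out, _ins, size in ops)  # keep everything: 300 MB
print(naive, plan_memory(ops))                 # reuse drops the peak to 200 MB
```

In this toy chain, keeping every activation would need 300 MB, while freeing each buffer after its last use caps the peak at 200 MB; on a real network the savings depend on the graph structure and the chosen calculation order.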
With AlexNet and VGGNet, image-recognition neural networks widely used in research, this technology reduces the volume of internal GPU memory used by over 40%, allowing neural networks up to roughly twice the scale of previous technology to be trained. This makes it possible to expand the scale of a neural network that can be trained at high speed on one GPU, enabling the development of more accurate models.
Figure 1: Technology to improve memory efficiency (Source: Fujitsu).
Fujitsu plans to commercialise the technology as part of Fujitsu Limited's AI technology, Human Centric AI Zinrai, by March 31, 2017.