ZeroPoint Aims to Raise DRAM Bandwidth

ZeroPoint's Ziptilion-BW IP can raise an application's performance, lower its power, and increase the apparent memory size. But doing so effectively depends on data patterns.

Bryon Moyer
System designers endlessly need higher performance, lower power, and faster access to data. ZeroPoint's lossless-compression technology attempts to improve all three by raising effective DRAM bandwidth: because fewer memory accesses are required, performance rises and system power drops.
Available for licensing now, the Swedish startup's Ziptilion-BW compression intellectual property (IP) employs a collection of entropy-based compression algorithms to shrink cache lines when writing back to memory; the best algorithm depends on the data pattern. Memory reads retrieve multiple compressed cache lines, so requests with high locality allow a single read to satisfy what would otherwise require multiple reads.
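The try-several-algorithms-and-keep-the-smallest approach can be illustrated with a minimal sketch. This is purely an assumption-laden toy (the `zero_runs` and `identity` compressors are invented for illustration; ZeroPoint's actual entropy-based algorithms are proprietary), but it shows the selection principle: run every candidate on a 64-byte cache line and record which one produced the shortest payload.

```python
# Illustrative sketch only: pick the smallest output among toy cache-line
# compressors. These are NOT ZeroPoint's algorithms, which are proprietary.

def zero_runs(line: bytes) -> bytes:
    """Encode runs of zero bytes as (0x00, count); other bytes as (0x01, value)."""
    out = bytearray()
    i = 0
    while i < len(line):
        if line[i] == 0:
            j = i
            while j < len(line) and line[j] == 0 and j - i < 255:
                j += 1
            out += bytes([0x00, j - i])   # one 2-byte token covers the whole run
            i = j
        else:
            out += bytes([0x01, line[i]])  # literal byte costs 2 bytes
            i += 1
    return bytes(out)

def identity(line: bytes) -> bytes:
    """Fallback: store the line uncompressed."""
    return line

CANDIDATES = {"zero-rle": zero_runs, "raw": identity}

def compress_line(line: bytes):
    """Run every candidate compressor and return (name, payload) of the smallest."""
    results = {name: fn(line) for name, fn in CANDIDATES.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

# A mostly-zero 64-byte cache line compresses well; random-looking data does not.
line = bytes(48) + b"ABCDEFGH" + bytes(8)
algo, payload = compress_line(line)
print(algo, len(payload))  # → zero-rle 20
```

In real hardware the candidates would run in parallel and a small tag in the stored metadata would record which scheme was chosen, so the decompressor knows how to reverse it on a read.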
Storage compression algorithms are well known, but they work best on large data blocks. Compressing main-memory data requires efficiency at the cache-line scale along with low latency to maintain computing performance. Techniques that meet those needs while delivering more than a 10% bandwidth improvement are scarce.
ZeroPoint's technology originated from research at Chalmers University of Technology in Gothenburg, Sweden. Cofounder and Chief Science Officer Per Stenström remains a professor there; cofounder and co-researcher Angelos Arelakis is the CTO. CEO Klas Moreau has served as CEO or board member at several small Swedish tech companies. Since its 2017 founding, the company has received cash infusions totaling roughly $7.5 million. Earlier investments came from Chalmers Ventures, the school's investment arm. The company has one lead customer with a smartphone SoC nearing tapeout.
Silicon area starts at 0.4mm² per instance in a 5nm process, requiring one instance per DRAM channel. Simulated benchmark results show compaction ratios as high as 4.0, including stored metadata. Memory bandwidth can more than double, and effective performance as measured by instructions per clock (IPC) can increase by more than 50% but averages 5%. Gains depend strongly on the application, suggesting data patterns will be important for deciding whether to include compression.
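The gap between the peak 4.0 compaction ratio and the modest average gains can be illustrated with a back-of-the-envelope model. This is an assumed simplification (the `hit_fraction` parameter and the 64-byte line size are illustrative, not ZeroPoint's published methodology): only some fraction of traffic compresses well, and the rest moves at full size.

```python
# Toy model (an assumption, not ZeroPoint's methodology): effective bandwidth
# gain when only a fraction of cache-line traffic achieves a given compaction.

def effective_bandwidth_gain(compaction_ratio: float,
                             hit_fraction: float = 1.0) -> float:
    """Average bytes moved per 64-byte line, assuming hit_fraction of lines
    compress at compaction_ratio and the rest transfer uncompressed."""
    bytes_per_line = 64.0
    compressed = bytes_per_line / compaction_ratio
    avg_bytes = hit_fraction * compressed + (1.0 - hit_fraction) * bytes_per_line
    return bytes_per_line / avg_bytes

print(round(effective_bandwidth_gain(4.0, 1.0), 2))  # → 4.0 (every line compresses)
print(round(effective_bandwidth_gain(4.0, 0.5), 2))  # → 1.6 (half the traffic compresses)
```

The second case shows why application data patterns dominate the outcome: even a strong 4:1 ratio on half the traffic yields only a 1.6x bandwidth gain, consistent with the article's observation that IPC improvements range from 5% on average to more than 50% in the best cases.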