Tachyum Unveils Universal Processor

Article By : Gary Hilson

Tachyum's latest universal processor, Prodigy, combines the functionality of a CPU and GPU into a single architecture, delivering better performance, power, and total cost of ownership for cloud computing, AI, and HPC environments.

Picking the right processor for the job may soon be a simpler decision for many design engineers, even in high-performance computing (HPC), data analytics, 5G network processing, and artificial-intelligence and machine-learning operations.

Rather than deciding between a CPU and a GPU, designers may now have the option of a single device: Prodigy, a universal processor developed by Tachyum that combines the functionality of a CPU and GPU in one architecture. In a briefing with EE Times, company founder and CEO Rado Danilak said this single device has the potential to deliver better performance, power, and total cost of ownership for cloud computing, AI, and HPC environments.

Expected to begin sampling by the end of the year and to enter volume production in the first half of 2023, Prodigy comprises 128 high-performance unified cores running at up to 5.7 GHz, with rack solutions for both air-cooled and liquid-cooled data centers. Tachyum took the CPU model, extended the vector processors to handle supercomputing workloads, and then modified them to handle AI data types and matrices. "Every core is faster than any Intel/AMD core, and overall chip-to-chip is about 4× higher," Danilak said.

One of the most critical pain points for data centers is power consumption, a constraint that could limit their expansion. Today, data centers consume about 4% of the planet's power and create 50% more global emissions than the entire airline industry.

“If nothing changes in current trends, data centers will consume 40% of the planet’s electricity by 2040,” Danilak said.

Despite this high power consumption, many servers are grossly underutilized, Danilak added, citing Facebook research that found average utilization of less than 50% over a 24-hour period. This low server utilization costs billions of dollars per year.

“If they start using universal processors instead of turning off the servers in the night when people are not using them, they can use them for AI, and they can get 10× more AI without buying a single GPU,” Danilak said.
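The idle-capacity argument above is straightforward arithmetic. The sketch below illustrates it with purely hypothetical numbers (fleet size and utilization are illustrative assumptions, not figures from Tachyum or Facebook):

```python
# Back-of-envelope illustration of the idle-capacity argument:
# a fleet running at ~50% average utilization leaves half of its
# compute-hours unused each day. All numbers are hypothetical.

servers = 10_000            # hypothetical fleet size
avg_utilization = 0.50      # cited figure: under 50% over 24 hours
hours_per_day = 24

# Server-hours of capacity that go unused per day.
idle_server_hours = servers * hours_per_day * (1 - avg_utilization)
print(f"Idle capacity: {idle_server_hours:,.0f} server-hours/day")
```

On these assumptions, the fleet wastes 120,000 server-hours every day; Danilak's point is that a universal processor could redirect that slack to AI workloads rather than leaving it to dedicated GPUs.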

All this is happening as the performance increase of processors has slowed and Moore’s law no longer seems to hold with process shrinks. In addition, wires are getting slower even as transistors get faster, which means wire delays now limit the performance of functional blocks.

Slowing improvements have led to overprovisioning and hence increased power consumption, which is why Tachyum decided to look at the problem from an electrical and physics perspective.

The performance increase of processors has slowed down and Moore’s law no longer holds with process shrinks. (Source: Tachyum)

Ultimately, Prodigy addresses the power and performance challenges by avoiding the movement of data across long wires, which is the fundamental issue causing the slowdown. “Not only do we gain the speed, but we also save power,” Danilak said. This allows Prodigy to do more with fewer resources.

Tachyum’s universal processor eschews the transfer of data over long wires: with process shrinks, wires become slower even as transistors get faster, so wire delays now limit the performance of functional blocks. (Source: Tachyum)

Danilak emphasized that Prodigy is not an AI accelerator but a CPU replacement that is well-suited for AI. The company announced at the beginning of June that it is building a limited quantity of Tachyum Prodigy Evaluation Platforms later this year, featuring fully functional Prodigy processors with memory and application software for qualified customers and partners.

The evaluation platform provides a high-performance server in a standard 2U air-cooled form factor that enables customers across a broad range of market segments to test and evaluate the universal processor.

This article was originally published on EE Times.

Gary Hilson is a general contributing editor to EE Times with a focus on memory and flash technologies.

 
