SAN JOSE, Calif. — Advanced Micro Devices is gearing up to join a race to accelerate deep-learning jobs in client and embedded systems. However, AMD is not yet ready to provide any specifics on the 7-nm x86 and GPU chips that it aims to deliver over the next year — or its roadmap beyond 7 nm.

“There is a need for high performance with what we call the edge [of the network] … closer to the source where data is coming in and [needing] to be analyzed — often in real time,” said Mark Papermaster, AMD’s chief technology officer, in an interview. “AMD’s machine-learning strategy is holistic and provides engines of AI for both the data center and the edge.”

In late 2016, AMD released its first GPU accelerators for deep learning in the data center. Since then, Google’s Tensor Processing Unit (TPU) and other designs have shown the advantages of adding arrays of multiply-accumulate units (MACs) in hardware to speed deep-learning algorithms.
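The multiply-accumulate operation these arrays parallelize is the workhorse of neural-network inference: every output of a fully connected layer is a dot product, i.e., a chain of MACs. A minimal illustrative sketch (not any vendor’s actual design):

```python
# Illustrative sketch of the multiply-accumulate (MAC) operation that
# hardware MAC arrays parallelize. Each output of a neural-network
# layer is a dot product: a chain of multiply-accumulates.

def mac_dot(weights, activations):
    """Compute one output value as a chain of multiply-accumulates."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC: multiply, then accumulate
    return acc

def dense_layer(weight_rows, activations):
    """A fully connected layer is one MAC chain per output neuron."""
    return [mac_dot(row, activations) for row in weight_rows]
```

Dedicated hardware wins by running thousands of these MACs in parallel rather than one at a time, as this sequential sketch does.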

In May 2017, graphics rival Nvidia rolled out Volta, its first GPU with embedded MACs that it called tensor cores. AMD’s CPU rival, Intel, said earlier this year that it plans to move its Movidius accelerator to PC motherboards running Windows ML. Analysts believe that Intel will embed Movidius-inspired cores into its PC processors eventually.

Papermaster would not say whether AMD plans to add MAC arrays to the 7-nm Vega GPU that it will launch later this year or the Zen 2 x86 processors that it will release early next year. However, he did say that Vega will support additional formats beyond the 16-bit floating point that the company’s GPUs support today.

A lively debate rages over ways to simplify neural networks to speed deep-learning jobs. Arm will support 8-bit operations in its ML Core, Nvidia has done research on 2-bit operations, and Imec is researching a single-bit alternative.
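The trade-off behind these reduced-precision formats can be illustrated with a simple uniform quantizer. This is a generic sketch, not Arm’s, Nvidia’s, or Imec’s actual scheme: fewer bits mean smaller MAC hardware and less memory traffic, at the cost of precision.

```python
# Generic sketch of uniform weight quantization (not any vendor's
# actual scheme). Fewer bits shrink the multipliers and memory
# traffic but leave fewer representable values.

def quantize(x, n_bits, x_max=1.0):
    """Map a float in [-x_max, x_max] to a signed n-bit integer code."""
    levels = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8 bits
    q = round(x / x_max * levels)
    return max(-levels, min(levels, q))     # clamp to representable range

def dequantize(q, n_bits, x_max=1.0):
    """Map an integer code back to its approximate float value."""
    levels = 2 ** (n_bits - 1) - 1
    return q / levels * x_max

# 8 bits reproduce a weight closely; 2 bits leave only coarse steps.
w = 0.30
w8 = dequantize(quantize(w, 8), 8)   # ~0.2992
w2 = dequantize(quantize(w, 2), 2)   # snaps to 0.0 (only -1, 0, 1 remain)
```

At 8 bits the error is a fraction of a percent; at 2 bits the weight collapses to one of three values, which is why aggressive low-bit schemes require retraining the network to tolerate the loss.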

So far, AMD’s public roadmap extends only to the use of a second-generation 7-nm process with extreme ultraviolet lithography for its Zen 3 products, probably starting in 2020. Papermaster declined to comment on what lies beyond.

In an interview earlier this year, the chief executive of Globalfoundries, AMD’s main foundry partner for x86 chips, downplayed talk of a 5-nm node, suggesting that the company is seeking investors for a 3-nm fab. TSMC, which traditionally makes AMD’s graphics chips, is ramping up both 5-nm and 3-nm processes, and rival Samsung is even filling in the gaps between those nodes.

Foundries “are advancing performance, performance per watt, and density, but it’s slowing versus the traditional Moore’s Law pace that the industry has become accustomed to … we will take advantage of full-node transitions as they come … in addition, we have a heterogeneous approach of using CPU, GPU, and other cores,” said Papermaster.

Meanwhile, AMD is still waiting on so-called 2.1D packaging options, such as versions of the wafer-level fan-out packages used in smartphones that are suitable for high-performance PC and server chips. Last year, Papermaster said that such options were two to three years away. This year, they don’t seem much closer.

“We see OSATs [outsourced assembly and test companies] push new techniques,” he said. “2.5D is proven … and experiments with other multilayer stacking techniques will bear fruit over time … I think we will see [alternatives to 2.5D packaging] in the next several years.”

AMD beat rival Nvidia in 2015 by releasing Fiji, a graphics processor and memory stack on a 2.5D package. But the technique is relatively expensive and, so far, has not been suitable for mainstream users such as PC gamers.

Separately, Papermaster offered no update on the progress of AMD’s joint venture partner, Tianjin Haiguang Advanced Technology Investment Co., said to be designing Zen-based x86 processors for the China market at AMD’s site in Austin. Papermaster was optimistic about the JV’s future despite the fact that versions of Arm, Power, and x86 processors designed by Chinese companies have not yet gained market traction there.

“Just as the AMD Ryzen and Epyc [have] had uptake, I suspect our JV partner should see ready adoption of its x86,” he said. “There’s a massive installed base, so the barrier to adoption is low.”

— Rick Merritt, Silicon Valley Bureau Chief, EE Times