PARIS — Intel Corp. built its formidable reputation as a hardware company, impressing the market with the speed and power of its processor architecture and delivering finer geometry nodes that allow the company to take advantage of Moore’s Law.

This is a tried-and-true checklist that the industry typically uses to size up a CPU company.

But if the world is, indeed, moving to embrace, apply, and implement more and more artificial intelligence (AI)-based algorithms in processing data, the yardstick for the success of processors and those who develop them will inevitably change.

At least one expert, Google’s platform architect Sheng Li, who was previously a researcher at Intel Labs, is now saying that the abstraction layer that used to separate software from hardware architecture has begun to collapse in the world of AI.

If so, hardware performance won’t be the only consideration in judging a company’s AI strategy. More important will be whether the company offers hardware that’s aware of AI software, and software that’s cognizant of different types of hardware.

AI is “bringing [to the industry] a new paradigm,” Kevin Krewell, principal analyst at Tirias Research, told us. It is “changing a whole computer system.” CPUs will need “a learning process,” he said, or “a machine-learning roadmap.”

Remi El-Ouazzane

In a recent phone interview with EE Times, Remi El-Ouazzane, chief operating officer of Intel’s AI Products Group, spent little time pitching Intel’s specific hardware architecture — such as Myriad X.

Myriad X, unveiled a year ago by Intel’s Movidius group, is a vision processing unit with a dedicated neural compute engine for accelerating deep-learning inferences at the edge. Its ability to deliver more than 4 TOPS is impressive.

But during our discussion, El-Ouazzane passed quickly over Movidius’ latest VPU. Instead, he stayed “on message” with Intel’s AI work, including nGraph, a framework-independent deep neural network (DNN) model compiler, and a new toolkit called “OpenVINO” (Open Visual Inference & Neural Network Optimization) designed for application developers.

Intel has good reason to emphasize the significance of its software offerings.

In the last several years, gunning for the AI market, Intel has acquired four companies: Nervana, Movidius, Mobileye, and Altera. Intel now has a broad AI hardware portfolio ranging from CPUs and GPUs to VPUs (Movidius) and FPGAs (Altera).

Intel pitches this AI portfolio diversity as its strength. El-Ouazzane noted during the interview, “At Intel, we’ve concluded that there are no one-size-fits-all solutions for AI.”

While that may be true, this diversity won’t be turned into gold unless Intel develops a software strategy that unifies all of its hardware offerings and helps customers choose and implement what they need.

In El-Ouazzane’s mind, that’s where nGraph and OpenVINO come in.

nGraph
nGraph is a “framework-neutral” DNN model compiler. Using the nGraph compiler, data scientists can bring their favorite deep-learning framework with them, then compile and run their models on the most optimized deep-learning compute device. In other words, Intel designed nGraph to offer “framework abstraction,” said El-Ouazzane.

Intel’s nGraph (Source: Intel)

Presumably, such a compiler lets data scientists create deep-learning models without having to think about how that model must be adjusted across different frameworks.
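The idea can be sketched in a few lines of Python. This is a toy illustration of “framework abstraction” only; every name here is invented for the example and none of it reflects nGraph’s actual API. The point is that framework-specific model descriptions are lowered into one shared intermediate representation (IR), and only the IR is compiled for a given device.

```python
# Toy sketch of a framework-neutral compiler pipeline (NOT nGraph's real API;
# all names and data layouts below are invented for illustration).

def import_model(framework: str, model: dict) -> list:
    """Lower a framework-specific model description into a common IR:
    an ordered list of (op, params) tuples."""
    if framework == "tf_like":        # hypothetical TensorFlow-style layout
        return [(layer["op"], layer["params"]) for layer in model["layers"]]
    if framework == "torch_like":     # hypothetical PyTorch-style layout
        return [(name, params) for name, params in model["modules"]]
    raise ValueError(f"unsupported framework: {framework}")

def compile_for(ir: list, device: str) -> str:
    """Pretend to emit a device-specific binary for the shared IR."""
    ops = "+".join(op for op, _ in ir)
    return f"{device}-binary({ops})"

# The same model, described in two framework-specific ways, compiles to the
# same artifact on a given device -- the IR hides the framework differences.
tf_model = {"layers": [{"op": "conv", "params": 3}, {"op": "relu", "params": None}]}
pt_model = {"modules": [("conv", 3), ("relu", None)]}

a = compile_for(import_model("tf_like", tf_model), "VPU")
b = compile_for(import_model("torch_like", pt_model), "VPU")
assert a == b == "VPU-binary(conv+relu)"
```

Whatever the framework of origin, the compiler sees only the IR, which is why the data scientist never has to adjust the model per framework.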

OpenVINO
With OpenVINO, Intel goes a step further. El-Ouazzane describes it as a toolkit that addresses “application domains.” The goal of OpenVINO is to help customers develop computer vision applications much faster. The toolkit extends CNN-based workloads across Intel hardware and maximizes performance.

Intel’s OpenVINO (Source: Intel)

Those customers may be developing drones, video surveillance systems, or robots. By leveraging OpenVINO, they can quickly deploy CNN-based deep-learning inference at the edge.

OpenVINO offers support for “heterogeneous execution” across computer vision accelerators — CPU, GPU, Intel Movidius Neural Compute Stick, and FPGA — using a common API. Specifically, OpenVINO accelerates time-to-market for Intel customers by offering a library of functions and pre-optimized kernels. Furthermore, the OpenVINO platform includes “optimized calls for OpenCV and OpenVX,” according to Intel.
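The “heterogeneous execution” idea can be illustrated with a small Python sketch. This is a toy placement policy in the spirit of a HETERO-style scheduler, not OpenVINO’s real API; the device tables and function names are invented for the example. Each layer runs on the highest-priority device that supports it, with the CPU as the universal fallback.

```python
# Toy sketch of heterogeneous execution behind a common API (NOT OpenVINO's
# real API; devices, capabilities, and names are invented for illustration).

# Hypothetical per-device capability tables: which layer types each
# accelerator can execute.
SUPPORTED = {
    "FPGA": {"conv"},                      # assume FPGA accelerates conv only
    "GPU":  {"conv", "pool"},
    "CPU":  {"conv", "pool", "softmax"},   # CPU can run everything (fallback)
}

def assign_devices(layers, priority=("FPGA", "GPU", "CPU")):
    """Map each layer to the first device in the priority list that
    supports it -- one common API call, many execution targets."""
    placement = {}
    for layer in layers:
        for device in priority:
            if layer in SUPPORTED[device]:
                placement[layer] = device
                break
    return placement

plan = assign_devices(["conv", "pool", "softmax"])
# conv lands on the FPGA, pool on the GPU, and softmax falls back to the CPU
```

The application developer writes against the single `assign_devices`-style entry point; the scheduler, not the developer, decides which accelerator executes each piece of the network.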

In the long run, winning the AI market is all about “scaling” the business, according to El-Ouazzane. Success depends on the company’s ability to support as many AI options as possible while helping more customers venture into the AI market.

It’s possible to sell specific hardware for a specific AI application, but that business model is hard to scale.

Intel intends to support more customers in diversified AI applications but with less hand-holding. It sees the key in tools like nGraph and OpenVINO that operate at higher levels of abstraction.

El-Ouazzane observed that customers are entering the AI market via different stacks. In AI’s early days, for example, system vendors like DJI, the Chinese drone maker, or Hikvision, a leading surveillance system supplier, were eagerly writing AI applications directly to lower-level hardware, based on a specific architecture such as Movidius’ Myriad VPU. For Hikvision, known for its expertise in advanced visual analytics, improving the accuracy of its own deep-learning model, tweaking its algorithm, and “writing to the metal” was obviously very important. El-Ouazzane put these early AI adopters in the first bracket.

But as AI applications proliferate, more developers have entered the market. In El-Ouazzane’s mind, they represent a second bracket of data scientists who need to take advantage of a framework abstraction like nGraph.

The third bracket consists essentially of application developers working on computer vision, neural-network inference, and deploying deep learning. Their goal is to accelerate their solutions across multiple platforms including CPU, GPU, VPU, and FPGA. Intel’s OpenVINO is intended for them.

Among all three categories, the first group demands the highest level of support, he acknowledged. The intensity level gets much lower for the third group as the OpenVINO platform offers them a library of functions and pre-optimized kernels. OpenVINO, El-Ouazzane said, is the “ultimate weapon” [for Intel] to scale the company’s AI business.

Intel describes its OpenVINO toolkit as a “write-once, scale-to-diverse” platform. (Source: Intel)

A widening gap
In discussing how AI is changing the way that hardware and software are developed, David Atienza, associate professor at École polytechnique fédérale de Lausanne (EPFL), who recently talked to EE Times in a separate interview, shared his concerns about two decoupled communities: one focused on the huge volume of software addressing various aspects of AI, the other on hardware still built on very traditional architectures.

He doesn’t believe that this is sustainable. Atienza asserted that AI is creating a huge gap between software and hardware architecture.

A traditional computing platform provides a clear abstraction layer that separates software from hardware architecture. So it was OK to have both sides — hardware and software — work independently.

That model no longer applies to the AI era in Atienza’s view. As Google’s Li observed, “There won’t be pure software or pure architecture.” There will be “architecture-driven apps” and “application-aware architecture,” he said.

In schools like EPFL, students are no longer trained to do just hardware or software. They are educated in both, and processor companies in the commercial world must catch up. AI companies need to do “full stack,” said Atienza. In his book, “Intel is one of those who are doing full stack.”

— Junko Yoshida, Chief International Correspondent, EE Times