SAN FRANCISCO — Try to find a technology conference or trade show where nobody is talking about artificial intelligence. Go ahead: Try. You certainly won’t find one at this week’s Design Automation Conference (DAC).

Dario Gil

The DAC keynote on Tuesday was “AI is the new IT,” offered by Dario Gil, vice president of AI and IBM Q at IBM Research. Gil presented a helicopter’s-eye view of the technology’s current topography, identifying key areas as the industry strives to broaden AI’s turf.

Gil looked back to 2012, a pivotal year. That was when the development of a deep convolutional neural net in the ImageNet Challenge proved to be a breakthrough in visual object recognition algorithms. Dramatic increases in labeled data and compute power, along with more progress in algorithms, have fueled the deep-learning revolution further.

Many industry segments are hot for AI. One way to measure this trend is to look at the enrollment of students in introductory courses on machine learning, noted Gil. Traditionally, such classes attracted 30 to 40 students, he said. Now, more than 1,000 have signed up at Stanford and 700 at MIT.

Narrow AI
AI, as we know it today, is being applied to language translation, speech transcription, object detection, and face recognition. Gil calls this “a narrow form of AI” in which AI runs a single task in a single domain.

Nonetheless, AI is already spreading like wildfire across many industry segments. “There are hundreds of applications, and the list is quite long,” said Gil. IBM is tracking AI challenges in a spectrum of applications that range from design automation, industrial, healthcare, and visual inspection to customer care, marketing/business, IoT, and compliance.

In IC design, for example, machine learning is already used to optimize synthesis flow. Advances in AI can now “automate the decisions of skilled designers,” according to Gil.

Machine learning applied to IBM 22-nm z13 system (Source: IBM)

A good example came when IBM developed its z and Power server microprocessor chips using a 22-nm process. Experience has taught IBM that machine learning can be effective at “automating synthesis flow parameter tuning, capturing knowledge from expert designers, and learning from prior design runs.”

Using machine learning for synthesis flow optimization (Source: IBM)
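The talk didn’t detail IBM’s tuner, but the core idea — learn from prior design runs to pick the next set of flow parameters — can be sketched in a few lines. Everything below is illustrative: the parameter names, their ranges, and the stand-in quality-of-result (QoR) function are invented, and the simple explore/exploit loop stands in for whatever model a real flow would train on its run history.

```python
import random

# Toy stand-in for a synthesis run: maps flow parameters to a QoR score.
# A real flow would invoke the EDA tool; these parameter names are hypothetical.
def synthesis_qor(params):
    # Lower is better; pretend the optimum sits near effort=0.7, max_fanout=12.
    return (params["effort"] - 0.7) ** 2 + ((params["max_fanout"] - 12) / 20) ** 2

def tune(n_trials=200, seed=0):
    rng = random.Random(seed)
    history = []   # the prior runs the tuner "learns" from
    best = None
    for _ in range(n_trials):
        if best is None or rng.random() < 0.3:
            # Explore: sample the parameter space uniformly.
            cand = {"effort": rng.uniform(0.0, 1.0),
                    "max_fanout": rng.uniform(4.0, 64.0)}
        else:
            # Exploit: perturb the best run seen so far.
            cand = {"effort": min(1.0, max(0.0, best[1]["effort"] + rng.gauss(0, 0.05))),
                    "max_fanout": min(64.0, max(4.0, best[1]["max_fanout"] + rng.gauss(0, 2)))}
        score = synthesis_qor(cand)
        history.append((score, cand))
        if best is None or score < best[0]:
            best = (score, cand)
    return best

best_score, best_params = tune()
print(best_score, best_params)
```

The loop captures the shape of the approach Gil described — each run feeds a history that biases the next parameter choice — without claiming anything about IBM’s actual models.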

Such efforts have shown the promise of AI. But throughout his speech, Gil cautioned, “We are just at the infancy.”

Why?

The horizon between narrow AI and broader AI (and, ultimately, general AI) is “still quite far away.”

Ultimately, Gil noted, “we must create a system that can learn and read, move automation across domains, and learn across arbitrary spaces. This still remains a very difficult problem.”

As for AI running a single task in a single domain with enough labeled data, Gil said, “We have no doubt that AI can achieve superhuman accuracy performance.” The challenge is how narrow AI can evolve into a broader form. Gil explained the dilemma: The moment you need to perform another task in another domain, you need to build a new neural network from scratch and clean it up. What the world needs, he added, is AI that can grow across tasks and domains.

Broad AI — across tasks, across domains
To broaden AI, the AI community faces several key challenges.

1) Explainable AI
First and foremost, Gil emphasized “explainable AI.”

“We must create AI that is less of a black box,” noted Gil. “And we should be able to have a better understanding of what’s happening in the neural networks.” He added that debuggers are needed for neural networks to spot errors.

The black-box approach might be OK for an AI system that recommends books to read. “But total black boxes are mostly unacceptable to so many professions in so many areas,” stressed Gil. “This is foundational” for progress in AI. When people are making high-stakes decisions involving hefty investments, with safety as a key factor, black-box AI is a blind alley.

2) AI is fragile
Gil said, “While it’s impressive what neural networks can do, AI is very fragile.” By injecting noise into a system, you can fool it. Confused by noise, AI might mistake a bus for a giraffe. This sounds hilarious, said Gil, but it’s serious business when one giraffe can wipe out a billion-dollar investment. Furthermore, one intrusion can poison a neural network. Given that neural networks are subject to all sorts of attacks, impenetrable security is critical, he explained.
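The bus-for-a-giraffe failure mode is easy to demonstrate even without a neural network. The toy model below is a hand-built linear scorer — its weights and class labels are invented for illustration — and the one real idea it borrows is the FGSM-style trick of nudging each input feature a small step against the sign of the gradient.

```python
# Toy linear "classifier": weights and labels are invented for illustration.
w = [0.25, -0.25, 0.25, -0.25, 0.25, -0.25, 0.25, -0.25]
x = [1.1, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # input scoring just above the boundary

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def predict(v):
    return "bus" if score(v) > 0 else "giraffe"

# The gradient of the score w.r.t. the input is simply w, so stepping each
# feature by eps against sign(w) (an FGSM-style step) lowers the score.
eps = 0.05  # only a 5% perturbation per feature
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x))      # bus
print(predict(x_adv))  # giraffe
```

A 5% nudge per feature is invisible to a human looking at the input, yet it flips the label — the fragility Gil warned about, in miniature.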

3) Ethics in AI
“Ethics: Boy, this is a big, big topic,” noted Gil. “Before we even talk about the notion of super intelligence, I’ll tell you an area where I am really preoccupied. That’s bias. Introduction of bias in neural networks.”

When a system trains by example, the examples themselves can introduce biases derived from social conventions. Gil cited credit decisions. A system that has learned from past examples might say, “Don’t give credit to minorities, or don’t give credit to women.” Gil said, “How do we verify that the examples the system uses are ‘bias-free’? How do we inspect?”
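One concrete inspection Gil’s question suggests is comparing outcome rates across groups — a demographic-parity check. The records below are synthetic and the field names invented; a real audit would run the same comparison over the system’s actual decisions.

```python
# Synthetic decision log; "group" and "approved" are illustrative field names.
records = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rates(rows):
    # Per-group fraction of approved decisions.
    rates = {}
    for g in {r["group"] for r in rows}:
        members = [r for r in rows if r["group"] == g]
        rates[g] = sum(r["approved"] for r in members) / len(members)
    return rates

rates = approval_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)  # a large gap between groups flags the model and data for inspection
```

This doesn’t prove bias by itself — groups can differ for legitimate reasons — but it is the kind of cheap, automatable probe that makes “how do we inspect?” answerable.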

4) Learning from small data
In training data, examples are common currency. For AI to evolve, the next step is to figure out how to learn more from less data, stressed Gil. AI should be able to leverage “prior knowledge” and transfer its learning and its “weight” from one neural network to other networks in other areas, he explained. AI combines learning and reasoning. We’ve made great progress in learning, but reasoning? “Less so,” noted Gil. In short, AI accumulates knowledge but also must be able to apply reason to that knowledge.
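The weight transfer Gil describes — reusing what one network has learned as the starting point for another — can be sketched with plain lists standing in for layer weight matrices. The layer sizes and the convention of keeping the early “feature” layers while re-initializing the task-specific head are illustrative assumptions, not any particular framework’s API.

```python
import random

def init_layer(n_in, n_out, rng):
    # One fully connected layer as an n_out x n_in list-of-lists matrix.
    return [[rng.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]

rng = random.Random(0)

# "Source" network, hypothetically trained on task A: 4 inputs, 3 classes.
source_net = [init_layer(4, 8, rng),   # early feature layers ...
              init_layer(8, 8, rng),
              init_layer(8, 3, rng)]   # ... plus a 3-class head for task A

# Transfer: deep-copy the feature layers and attach a fresh 5-class head
# for task B, instead of training a whole new network from scratch.
target_net = [[row[:] for row in layer] for layer in source_net[:-1]]
target_net.append(init_layer(8, 5, rng))

print(target_net[0] == source_net[0])  # True: prior knowledge carried over
print(len(target_net[-1]))             # 5: new head sized for task B
```

Task B now starts from task A’s learned features and only needs enough data to fit the new head — the “learn more from less data” payoff Gil pointed to.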

5) AI infrastructure
The industry must continue to build infrastructure. AI advancements have been enabled by increasing compute power. A recent “hardware renaissance” has given birth to new architectures. More creativity has opened a “wonderful roadmap” ahead of us, noted Gil. “Thanks to specialized workloads like deep learning,” AI has seen tremendous progress. But Gil emphasized that the industry must continue to develop an AI infrastructure.

The path to broad AI (Source: IBM)

General AI
Unlike broad AI, general AI isn’t expected by IBM Research to arrive until after 2050. Gil said that when scientists throw out a number like 2050, what they’re really saying is, “We have no idea.”

The evolution of AI (Source: IBM)

But the general form of AI is still on the agenda. The research community is intent on figuring out issues of AI comprehension.

Certainly, machines have already proven that they can lick humans in games like chess and backgammon, where rules prevail in a well-defined environment. IBM researchers today wonder, however, how the machine mind fares in a non-binary environment that produces no black-and-white winner. With this problem in mind, an IBM Research team in Israel launched “IBM Debater.”

Last week, in an event here in San Francisco, IBM demonstrated IBM Debater, which — some said — “just about held its own against a skilled human debater” on the topic of whether government support for certain programs, such as subsidizing music education, is a good thing.

Here’s a clip provided by Gil. Nobody dares suggest that AI has nailed the art of debate, but the video offers a glimpse of a possible future in which yelling at your TV will no longer be a one-way argument.

— Junko Yoshida, Chief International Correspondent, EE Times