Bringing Common Sense to ‘Brittle’ AI Algorithms

Article By: George Leopold

A DARPA effort looks to mimic the learning processes of infants to develop more general machine learning models.

The ongoing recalibration of AI research and development underscores a fundamental tenet of machine learning: We must learn to crawl before we can walk.

Thus far, AI hype has mostly talked the talk rather than walked the walk. Returning to what appear to be engineering first principles, U.S. research efforts are attempting to move beyond current “brittle” AI models that excel at only specific tasks. The goal is to develop more generalized models that can adapt much as humans do in new situations.

Among those efforts is a Machine Common Sense program overseen by the Defense Advanced Research Projects Agency (DARPA) that seeks to imbue machine learning models with the kinds of commonplace reasoning displayed by some of the fastest learners on the planet: infants.

“One of the challenges of state-of-the-art AI, or machine learning, is that it tends to be very narrow, so it’s focused on a particular task and doesn’t generalize very well,” said Matt Turek, a program manager in DARPA’s Information Innovation Office.

Along with AI researchers, DARPA has enlisted child behavioral psychologists to map and encode “the common sense that’s inspired by infants,” Turek said. “Children aged zero to 18 months are probably some of the best learners in the world. They explore more and, in some ways, take more risks than adults.”

The resulting common-sense AI algorithms would infuse machine learning models with a more general understanding of objects, places, relationships and other properties needed for AI reasoning.

DARPA’s common-sense approach seeks to move beyond current narrow AI systems by “learning these common-sense facts, applying them in new situations and being much more flexible and adaptable about our learning process,” Turek said. “Those are critical to having those more robust, more general systems.”

(Source: DARPA)

The research effort also seeks to develop broader repositories of knowledge and reasoning techniques that would allow machine learning models to adapt to different problems in ways that humans do through experience.

To that end, the four-year effort is compiling large repositories of common-sense knowledge based on large, curated datasets and accompanying reasoning capabilities. The program is also making greater use of synthetic training data generated by simulations.

One output would be knowledge graphs, networks of semantic data that represent facts about a specific object and how it relates to other objects. Such a network, capturing real-world situations, objects and concepts along with the relationships among them, would then be scaled up within AI systems. Turek added in an interview that the research agency wants to apply techniques like deep learning and convolutional neural networks to develop “a new slant on that approach.”

The goal is “large repositories of common-sense knowledge,” he added.
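To make the knowledge-graph idea concrete, here is a minimal sketch in Python of how common-sense facts might be stored as subject-predicate-object triples and queried. The class, the helper names and the example facts are illustrative assumptions, not part of DARPA’s actual program or data.

```python
# Minimal illustration of a knowledge graph as subject-predicate-object triples.
# The facts and query helpers below are hypothetical examples only.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # Index triples by subject for quick lookups.
        self._by_subject = defaultdict(list)

    def add_fact(self, subject, predicate, obj):
        self._by_subject[subject].append((predicate, obj))

    def query(self, subject, predicate=None):
        """Return objects related to a subject, optionally filtered by predicate."""
        return [o for p, o in self._by_subject[subject]
                if predicate is None or p == predicate]

kg = KnowledgeGraph()
kg.add_fact("cup", "is_a", "container")
kg.add_fact("cup", "can_hold", "liquid")
kg.add_fact("liquid", "spills_when", "tipped")

print(kg.query("cup"))               # ['container', 'liquid']
print(kg.query("cup", "can_hold"))   # ['liquid']
```

Production-scale systems hold millions of such triples and typically layer learned embeddings on top, but the underlying structure of entities linked by typed relations is the same.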

On a separate track, AI researchers are applying existing benchmarks and exploring new metrics in an attempt to gauge progress towards mechanical acumen. For instance, web browsing has been used to assemble repositories of machine common sense capable of answering queries based on natural language and images. The results were tested against an Allen Institute for Artificial Intelligence benchmark geared toward machine common sense.

“Those are all parts of the way we evaluate ourselves on an ongoing basis throughout the program,” Turek said.
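As a rough sketch of what such benchmark scoring involves, the Python snippet below computes accuracy on toy multiple-choice common-sense questions. The question format, the stand-in model and the scoring function are assumptions for illustration; they do not reflect the actual API or content of the Allen Institute benchmark.

```python
# Hypothetical multiple-choice common-sense evaluation, in the spirit of
# benchmarks such as those from the Allen Institute for AI. The questions,
# model and scoring below are illustrative assumptions only.

def evaluate(model_answer_fn, benchmark):
    """Return the fraction of benchmark questions the model answers correctly."""
    correct = 0
    for item in benchmark:
        prediction = model_answer_fn(item["question"], item["choices"])
        if prediction == item["answer"]:
            correct += 1
    return correct / len(benchmark)

benchmark = [
    {"question": "If you drop a glass on a tile floor, what most likely happens?",
     "choices": ["it bounces", "it shatters", "it floats"],
     "answer": "it shatters"},
    {"question": "Where would you most likely find a pillow?",
     "choices": ["on a bed", "in a refrigerator", "under a car"],
     "answer": "on a bed"},
]

# A trivial stand-in "model" that always picks the second choice.
naive_model = lambda question, choices: choices[1]

print(f"accuracy: {evaluate(naive_model, benchmark):.2f}")  # 0.50 on this toy set
```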

In its infancy, common-sense AI appears to be transitioning from crawling to taking its first, tentative steps. “We’re still a ways away from that highly trusted, mission critical system that has the sort of flexibility of human learning and has the breadth of knowledge that a human has,” Turek acknowledged. Still, university researchers working on the Machine Common Sense effort are making progress in areas such as flexible learning, applying their early results in robotic systems.

“Can your robot handle stairs if it’s never been trained on stairs? That’s something these algorithms are starting to demonstrate.”

In another example, Turek noted that a bipedal robot developed by Oregon State University engineers recently completed a 5-kilometer foot race.

The DARPA official conceded these early robotics advances remain far from human ability to make sense of the world. Still, Turek concluded, those demonstrations represent “promising early signs for where these much more flexible learning algorithms can demonstrate some concrete, real-world utility.”

This article was originally published on EE Times.

George Leopold has written about science and technology from Washington, D.C., since 1986. Besides EE Times, Leopold’s work has appeared in The New York Times, New Scientist, and other publications. He resides in Reston, Va.

 
