AI Safety Moves to the Forefront

By George Leopold

Safety advocates call for a national AI testbed, with trust based on 'engineering discipline'.

The splashy unveiling of Tesla’s robot assistant stokes the ongoing debate about AI safety and how automated systems can be tested and validated before they are unleashed on city streets and factory floors.

The fear during the initial round of AI hyperbole focused on malevolent, self-replicating, HAL-like machines eventually overpowering their creators or roaming uncontrolled on battlefields. The debate has since become more pragmatic, with a sharper and welcome focus on safety. Specifically, how can we promote AI safety in ways that will allow human operators to trust autonomous systems in applications that for now remain well short of mission-critical uses requiring 99.999 percent reliability?

A positive first step embraced by regulators is recognizing that AI incidents involving vehicles using driver-assistance systems are “crashes,” not “accidents,” notes automotive analyst and EE Times blogger Egil Juliussen. “The auto industry is moving toward ‘crash’ since it is someone’s fault or someone is to blame,” Juliussen said. “The term ‘accident’ often gives someone a free ride.”

In a series of policy briefs on AI safety, researchers at the Center for Security and Emerging Technology at Georgetown University attempt to identify the engineering requirements for achieving safer AI systems.

“Today’s cutting-edge AI systems are powerful in many ways, but profoundly fragile in others,” note authors Zachary Arnold and Helen Toner. “They often lack any semblance of common sense, can be easily fooled or corrupted, and fail in unexpected and unpredictable ways.

“It is often difficult or impossible to understand why they act the way they do,” the researchers concluded, adding that the degree of trust placed on fallible AI systems “could have terrible consequences.”

The benign Tesla Bot. (Source: Tesla)

A central problem is understanding how black-box AI systems function—or what has come to be called AI “explainability,” as in a math teacher demanding that students “show their work.”

Hence, the AI researchers propose the formation of a national AI testbed that would begin to set the parameters for ensuring safe AI systems based on deep learning. “Today, there is no commonly accepted definition of safe AI, and no standard way to test real-world AI systems for accident risk,” conclude authors Arnold and Toner.

While there are proven methodologies for testing earlier expert systems used in fail-safe applications like aircraft autopilots, there is no AI equivalent. “Those methods just don’t work for deep learning,” Toner stressed in an interview.

“We think that a lot more effort should be put into developing new methodologies as we start [using] these systems in places where mistakes or malfunctions could be really serious,” she added. “That we have ways of testing [AI systems] in advance and ensuring that we know what they can do and what they can’t do, and when they will work and when they won’t work.”

Pioneering companies like Tesla may be coming around to this view even as they push the AI technology envelope with prototypes like Tesla Bot. Tesla CEO Elon Musk said a bot prototype could be unveiled next year.

Elon Musk promises Tesla Bot will be “friendly”.

While promoting the “eerily good” predictability of his Tesla Autopilot, an assertion regulators are beginning to question, Musk acknowledged unintended consequences in announcing the Tesla Bot during the car maker’s recent AI Day event.

Tesla Bot “is intended to be friendly, of course,” Musk reassured. “We’re setting it such that it is—at a mechanical level, at a physical level—you can run away from it,” a safety measure that drew some laughs, “and most likely overpower it. Hopefully that doesn’t ever happen, but you never know.”

Musk hedged a bit, as well he should have. One reason, AI safety researchers note, is that Tesla Bot and other recent examples represent an early, evolutionary step.

“An engineering discipline for AI doesn’t actually exist,” said Toner of the Georgetown AI center. “There are no technical standards, there is no understanding of what performance we would want to achieve and how we could tell if we were achieving it.”

AI development is reaching an inflection point, Toner added. “It clearly could be useful for a lot of things, but so far we’re only using it for mostly pretty low stakes. The question is, can we solve the reliability [and] trustworthiness challenges in order to unlock this much wider space of higher-stakes applications?

“To me, that’s still a question mark.”

This article was originally published on EE Times.

George Leopold has written about science and technology from Washington, D.C., since 1986. Besides EE Times, Leopold’s work has appeared in The New York Times, New Scientist, and other publications. He resides in Reston, Va.

