The support of the public is paramount to any country or region’s successful global leadership in transforming businesses, markets and economies using the technology, a panel discussion at CES concluded.
To start, the panelists, who included representatives from industry, trade associations, politics and government, agreed that leadership in AI is not a simple win-or-lose proposition.
“I would not agree with the idea that it’s a zero-sum game, that one nation is leading and that it’s a race and someday you declare a victory, and then everybody else is a loser… I think everyone can benefit from the advantages of artificial intelligence,” said Lynne Parker, deputy CTO for the United States, from the White House’s Office of Science and Technology Policy.
Parker defined a leading AI nation as one with many companies at the forefront of innovation, many leading universities producing cutting-edge ideas in the field, and a strong innovation ecosystem in which industry and academia work closely together. By those metrics, she argued, the US is leading.
“There will be winners, but maybe it’ll be first place, second place, third place, fourth place, and not winner take all,” said Michael Beckerman, president and CEO of the Internet Association. “From the companies standpoint, certainly [leadership] will come from innovation and the ability to put in place transparency and safeguards to ensure there’s not bias or discrimination through artificial intelligence, and ensuring that ethics are set up in a way that meets our common goals and standards… and from the government standpoint, making sure that policies are in place that both encourage and allow for innovation.”
These policies must ensure there are safeguards for both government and private sector use of AI, Beckerman said, pointing out that some of the riskier potential applications in terms of public trust are for government applications of the technology.
The panelists also agreed that consumer trust is paramount to AI in a way it hasn’t been for previous technology trends.
“For both private sector and public sector use, you can’t truly win or succeed with AI deployments, with scaling AI, unless you have consumer and citizen trust,” said Adelina Cooke, North American AI policy lead at Accenture. “Any [leadership] race is going to need to engender trust among the population. When we are thinking about scaling [up AI applications], it’s not just feasibility and innovation, it’s making sure that you have the proper governance and responsible oversight within an organization [that’s important].”
Panelists took different views on the role governments should play in increasing AI leadership in their countries.
The White House’s Lynne Parker described the US government’s hands-off approach to regulation of the use of AI technology.
“Certainly, I think at the beginning, the role of the federal government is not to get in the way,” she said. “We want to foster innovation and make sure it’s being used in ways we can all benefit from, but… there are many areas in which we need to have more oversight.”
AI presents a unique challenge, she said: many existing laws already protect Americans from harms such as discrimination, and the country has a robust legal system to enforce them. But if these laws are enforced piecemeal at the state and local level, companies must navigate a patchwork of laws and regulations, which hampers innovation in every locale.
“At some point, the federal government needs to step up and say, okay, we’re actually hampering innovation by not having regulatory oversight or a process for it, or having any consistency,” she said.
The White House released a draft memo earlier this week that would establish consistent guidelines for regulatory agencies. Those guidelines should help protect the public while supporting the innovation ecosystem by giving companies some predictability about the regulatory approach, she said.
Italian member of parliament Mattia Fantinati detailed both the European Commission’s approach and the approach in Italy.
The European Commission’s strategy for AI leadership is based on several key ideas: boosting technological and industrial capacity, driving uptake of AI across the economy through public-private partnerships, preparing for socioeconomic changes that will arrive quickly, and ensuring a legal and ethical framework within which innovation can flourish.
Italian initiatives for adoption of AI are focused on small and medium enterprises (SMEs), reflecting the country’s economy, he said.
“Most developed countries have adopted an AI strategy that reflects their social and political system,” he said, noting that Italy is home to many SMEs in manufacturing and handicrafts. “My role is to… create a collaboration between the masters of handicraft and artificial intelligence. It’s not easy, but we have to do it, because the European strategy is focused on the SME.”
The European Commission’s strategy includes using public funding to stimulate private investment, particularly with early-stage startups.
USA vs. China
Asked by an audience member about Kai-Fu Lee’s 2018 book “AI Superpowers: China, Silicon Valley, and the New World Order,” in which he details China’s strengths in this arena, Parker again stressed the importance of public trust.
She acknowledged Lee’s argument that China is very good at taking existing ideas and implementing them.
“At the same time, I think we, as the free world, also care about exactly how these technologies are used,” she said. “We want to make sure that we don’t use the technologies in ways that are inconsistent with the values of our nations.”