AI Ethics Initiative Brings Ethical Principles to Edge Devices

Article By : Sally Ward-Foxton

Chipmaker’s AI ethics initiative promotes explainability, security, privacy and vigilance...

NXP has launched an AI ethics initiative intended to encourage the ethical development of AI systems in edge devices. The initiative, a framework of five key principles, is intended for NXP to use when developing AI applications or AI-enabling technologies, but the company also hopes to set a good example for its customers.

Edge AI systems today include all manner of devices that sense their environment and analyze the data in real time, on-device. This might be a smartphone using facial recognition to unlock itself, or home appliances that respond to the user’s voice commands. Many use NXP’s microcontrollers and application processors that are optimized for machine learning tasks.

NXP started work on its AI ethics framework 18 months ago, following the model of the successful Charter of Trust for IoT Security, a cross-industry initiative founded in 2018. Input and insights were sought from engineers and from customers around the world.

NXP’s five key principles for ethical AI systems are:

  • Non-maleficence. Systems should not harm human beings, and algorithmic bias should be minimized through ongoing research and data collection.
  • Human autonomy. AI systems should preserve the autonomy of human beings and safeguard freedom from subordination to — or coercion by — AI systems.
  • Explainability and transparency. Vital to build and maintain trust in AI systems — users need to be aware they are interacting with AI and need the ability to retrace the system’s decisions.
  • Continued attention and vigilance. To promote cross-industrial approaches to AI risk mitigation, foster multi-stakeholder networks to share new insights, best practices and information.
  • Privacy and security by design. These factors must be considered from the start; they cannot be bolted on as an afterthought. Traditional software attack vectors must be addressed, but they alone are not sufficient. Strive to build new frameworks for next-gen AI/ML.

The framework will serve as a model both for NXP itself and for NXP’s customers, said Svend Buhl, head of government affairs for NXP Germany and chairman of NXP’s global government affairs board.

Svend Buhl (Image: NXP)

“Before any sort of implementation can happen, we need a robust and solid framework that we all agree with and that we commit to,” Buhl told EE Times. “That framework is of course thought of as a reference scheme for our customers. Some of them are already working with their own ethical codes, and so we had to see that these objectives match with the interests of our customers. But we first want to implement this culture in our company, and see it reflected in our portfolio moving forward.”

Buhl said that NXP has observed semiconductor companies coming up with ethical frameworks, but these are often used solely as recommendations for policy makers or regulators, rather than as practical guidelines for system development.

“That’s not what we intend,” said Buhl. “We really wanted to give ourselves a framework for the creation and development of artificial intelligence systems and their components.”

Buhl is hopeful that NXP’s framework will inspire other companies to follow its example, citing as a success story the Charter of Trust initiative on IoT security, which now has 15 members from across the industry. But is a silicon vendor the right company to be leading an AI ethics initiative — surely the ethical responsibility lies with the system implementer, that is, NXP’s customer?

“I think it’s absolutely vital that we do it, because if the architecture of a system doesn’t enable the implementation of certain tools, even if a customer wants to create ethically compliant systems, he will have a hard time to actually do this,” said Buhl. “And besides, since we are providing entire solutions with huge components also from the ecosystem libraries, compilers, tools… I think it is very important that we set an example here and deliver into the market solutions that are ethics-ready.”

Principles like security by design and privacy by design are certainly within NXP’s domain. Features like secure boot, secure key management, secure updates, encrypted communication and protection of the machine learning models are a big part of this. NXP also provides hardware-based measures to prevent adversarial attacks, misuse and data poisoning.
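The “protection of the machine learning models” mentioned above can be illustrated with a minimal integrity check: before loading a model blob, the device verifies an authentication tag over it, so a tampered model is rejected. This is a generic sketch, not NXP’s implementation — the function names, key handling and model bytes are all hypothetical, and a production edge device would typically use asymmetric signatures anchored in hardware key storage rather than a shared secret in software.

```python
import hashlib
import hmac

def tag_model(model_bytes: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 integrity tag for a model blob."""
    return hmac.new(key, model_bytes, hashlib.sha256).digest()

def load_model_if_trusted(model_bytes: bytes, tag: bytes, key: bytes) -> bool:
    """Accept the model only if its tag verifies; uses a constant-time compare."""
    expected = tag_model(model_bytes, key)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    # Placeholder key for illustration; real keys would live in secure hardware.
    key = b"device-provisioned-secret"
    model = b"\x00\x01fake-model-weights"
    tag = tag_model(model, key)

    print(load_model_if_trusted(model, tag, key))          # genuine model -> True
    print(load_model_if_trusted(model + b"x", tag, key))   # tampered model -> False
```

The constant-time comparison matters here: a naive `==` check can leak timing information that helps an attacker forge a valid tag byte by byte.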

“All these things in combination are what counts when you want to trust your system and build a model that actually follows your own or your customers’ understanding of an ethical moral code, because when the system design is flawed from a security perspective from the beginning, it may be very hard to achieve an ethically compliant and trustworthy system,” Buhl said.

EE Times wondered whether NXP, headquartered in Europe, might have a slightly different moral code from some of its customers in other parts of the world. For example, while end users’ data protection is a robust tenet of European law (covered by the notoriously strict GDPR, the General Data Protection Regulation), this principle is less well-established in other geographies. Does Buhl think customers around the world will be as ready to adopt this ethical framework as its European customers will?

“That remains to be seen, but I’m positive!” Buhl said, noting that in North America the debate around ethical responsibility of creators of AI systems is gaining pace. For example, there has been much debate about AI ethics initiatives published by companies such as IBM, Google and Microsoft. Buhl said NXP has also seen “a new awareness in China around ethically responsible use of data collection.”

“I think there is a global awareness,” he said. “Of course, some regions of the world put more focus on this than others, that’s correct… There’s still a long way to go.”
