Reports are circulating that Xnor, the Seattle-based edge-AI company, has been quietly acquired by Apple. An investigation by GeekWire suggests the deal was worth in the region of $200 million. This development could mean Xnor’s low-power algorithms for object detection in photos end up on the iPhone.

Xnor, a spin-out from the Allen Institute for Artificial Intelligence (AI2), had raised $14.6 million in funding since it was founded three years ago. Xnor’s founders, Ali Farhadi and Mohammad Rastegari, are best known for their work on YOLO, a neural network widely used for object detection.

In November, Xnor abruptly pulled out of a licensing deal with fellow Seattle startup Wyze, a maker of smart security cameras.

Xnor’s solution for embedded processors is based on binarized neural networks (BNNs), which use binary values for activations and weights instead of full-precision values, sharply reducing model size and memory requirements. XNOR-Net, the first binarized convolutional neural network, can detect objects in images using very little processing power while largely maintaining accuracy.
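The core trick behind BNNs can be sketched in a few lines: once weights and activations are constrained to {-1, +1}, a dot product collapses to an XNOR of packed bit vectors followed by a popcount, which is far cheaper than multiply-accumulate arithmetic. The sketch below illustrates the general idea only; it is not Xnor’s actual implementation, and the helper names are invented for illustration.

```python
# Illustrative sketch of the binary-network trick: weights and activations
# binarized to {-1, +1}, so a dot product becomes XNOR + popcount.

def binarize(values):
    """Map real values to {-1, +1} by sign."""
    return [1 if v >= 0 else -1 for v in values]

def to_bits(signs):
    """Pack +1 -> bit 1, -1 -> bit 0 into an integer bitmask."""
    mask = 0
    for i, s in enumerate(signs):
        if s == 1:
            mask |= 1 << i
    return mask

def xnor_dot(a_bits, b_bits, n):
    """Binary dot product: (matches - mismatches) = 2*popcount(XNOR) - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n

a = binarize([0.5, -1.2, 0.3, 0.9])   # -> [ 1, -1,  1, 1]
b = binarize([1.1, -0.4, -0.7, 0.2])  # -> [ 1, -1, -1, 1]

# XNOR+popcount result agrees with the ordinary dot product on the signs
assert xnor_dot(to_bits(a), to_bits(b), 4) == sum(x * y for x, y in zip(a, b))
print(xnor_dot(to_bits(a), to_bits(b), 4))  # -> 2
```

In hardware such as a small FPGA, the XNOR and popcount map directly onto a handful of logic gates, which is why these networks fit power budgets measured in milliwatts.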

Xnor FPGA Demo Board

Xnor’s low-power FPGA demo, using a Lattice ECP5 device, could run person-detection inference at 32 frames per second using just 48 mW. Xnor also demonstrated this system running indefinitely from a small solar cell (Image: Xnor)

These models and techniques can be used for image processing in resource-constrained environments such as smartphones, security cameras, and other remote sensor networks. Image data can then be processed on the edge device rather than sent to the cloud, which would incur high latency and raise privacy concerns.

For example, an impressive Xnor demo running on a Raspberry Pi Zero was capable of person detection at 8 frames per second.

“That’s a 50-cent CPU, not normally considered a viable platform for edge inference,” said Xnor VP of engineering Peter Zatloukal at the Embedded Vision Summit earlier this year.

The company’s demos also included state-of-the-art person detection using deep learning on a $2 FPGA (Lattice ECP5). This demo could run inference at 32 frames per second for person detection using 48 mW (1.5 mJ per inference). The power requirements were so small that the demo was powered by ambient sunlight via a small solar harvester. It could run indefinitely with no external power input, Zatloukal said.
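The 1.5 mJ figure follows directly from the quoted power and frame rate, assuming the 48 mW is drawn continuously while inferencing:

```python
# Sanity check on the quoted demo figures: average power divided by
# frame rate gives energy per inference.
power_w = 0.048            # 48 mW average power
frames_per_second = 32     # inference rate

energy_per_inference_mj = power_w / frames_per_second * 1000
print(energy_per_inference_mj)  # -> 1.5 (mJ), matching the quoted figure
```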

The company’s developer platform, AI2GO, comprised software development kits and pre-trained AI models optimised for embedded devices.

Other recent Apple deals include the acquisition of British image fusion startup Spectral Edge and a license deal with Imagination Technologies.