Bright Future for Embedded Vision

Article By : Anne-Françoise Pelé

Embedded vision technology will soon touch nearly every aspect of our daily lives, but what is the status of the technology already in use?

Embedded vision technology will soon touch nearly every aspect of our daily lives, but what is the status of the technology already in use? What role does AI play today? What is happening at the edge and in the cloud? These questions were the focus of the panel discussion on the trend topic “Embedded Vision” at last week’s embedded world 2021.

Driven by advances in sensors, processors, and software, embedded vision is going everywhere, from agriculture to factories and from autonomous vehicles to professional sports. Even the Covid-19 pandemic has accelerated its deployment, with vision systems being used in applications such as public surveillance and health and safety inspections.

AI-enabled embedded vision

Artificial intelligence (AI) is gaining momentum in embedded vision and image processing applications as developers increasingly apply deep learning and neural networks to improve object detection and classification.

There is no doubt that AI opens up new possibilities, but panelists agreed it has to become easier to use. “On the one hand, there are a lot of benefits from the customers’ perspective,” said Olaf Munkelt, managing director of MVTec Software. “On the other hand, AI technology is a little clumsy. We have to make it easier to use, to enable embedded vision customers to quickly get to the point where they see added value. This has to do with all the steps in the workflow of AI-based systems, from data labeling, data inspection, and data management up to processing with technologies like semantic segmentation, classification, and anomaly detection.” Munkelt called for an integrated approach that makes it easier for customers to deploy an embedded vision project.

Sharing a similar view, Fredrik Nilsson, head of the Machine Vision business unit at Sick, noted that AI and deep learning can solve tasks that are difficult to tackle with conventional rule-based image processing. Deep learning will not, however, replace conventional image processing; the two technologies will coexist side by side “for a long time,” he argued. “There are definitely applications where the rule-based [image processing algorithms] are more applicable than deep learning. We can see hybrid solutions, for instance, doing object segmentation with deep learning and applying measurement tools.”
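As a rough illustration of the hybrid approach Nilsson describes, the Python sketch below stubs out the deep-learning segmentation step with a simple threshold (a stand-in for any trained model) and then applies conventional rule-based measurement with OpenCV. The function names, threshold value, and synthetic test image are illustrative assumptions, not part of any panelist’s product.

```python
# Hybrid vision sketch: a (stubbed) segmentation step followed by
# conventional rule-based measurement with OpenCV.
# The segmentation stub and all parameter values are illustrative assumptions.
import cv2
import numpy as np


def segment_object(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep-learning segmentation model.

    A fixed threshold produces the binary mask that a trained
    network would normally supply.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    return mask


def measure_object(mask: np.ndarray) -> tuple[float, float]:
    """Rule-based measurement: fit a rotated rectangle to the largest
    blob in the mask and return its width and height in pixels."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, 0.0
    largest = max(contours, key=cv2.contourArea)
    (_, _), (width, height), _ = cv2.minAreaRect(largest)
    return width, height


if __name__ == "__main__":
    # Synthetic test image: a bright rectangle on a dark background.
    image = np.zeros((240, 320, 3), dtype=np.uint8)
    cv2.rectangle(image, (60, 80), (260, 160), (255, 255, 255), thickness=-1)
    mask = segment_object(image)
    w, h = measure_object(mask)
    print(f"measured object: {w:.1f} x {h:.1f} px")
```

In a real system, the threshold stub would be replaced by the network’s output mask, while the measurement step stays purely rule-based.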

A race is under way on the AI accelerator hardware side, said Munkelt. Many startups are coming up with “really interesting hardware” that can sometimes “perform 10-20 times better than existing GPU hardware from established vendors.” Looking ahead, he stressed how important speed will become for processing image data. “Everybody in our vision community is looking at these AI accelerators because they can provide a big benefit.”

What happens on the edge? What happens in the cloud?

Put to cloud provider AWS, these questions might seem to suggest their own answer, though the reality is more subtle.

AWS is pursuing two objectives when it comes to embedded vision. The first, said Austin Ashe, head of strategic OEM partnerships, IoT, at Amazon Web Services (AWS), is lowering the barrier to entry for customers taking on embedded vision for the first time or looking to expand and scale it. The second is to “deliver value beyond the initial use case”.

“As for lowering the barrier to entry, we recognize that 75% of businesses plan to move from pilot to full operational implementations over the next two to five years. We are positioning ourselves to orchestrate the edge and the cloud in a very unique way.” He further explained, “Edge is extremely important when it comes to things like latency, bandwidth, and the cost of transmitting data; even security comes into play. What the cloud can do is lower the barrier to entry here. We can monitor devices, whether it is one device or a fleet of them, and provide real-time alerts or mechanisms to understand what those devices are doing.” These devices, Ashe continued, can be updated over the air. So, when managing embedded vision systems at scale, it is possible to take a model, train it in the cloud, and then deploy it over the air to all the machines that need it.
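A minimal sketch of the cloud-to-edge flow Ashe outlines, in which a model trained in the cloud is pushed over the air to a fleet, might look like the following. The bucket, object key, and MQTT topic names are placeholder assumptions, and a production rollout would typically rely on a managed mechanism such as an AWS IoT job rather than a bare MQTT message.

```python
# Rough sketch of an over-the-air model rollout using boto3.
# Bucket, key, and topic names are placeholders chosen for illustration;
# a production setup would normally use a managed mechanism (e.g. an
# AWS IoT job) instead of a bare MQTT notification.
import json

import boto3

MODEL_BUCKET = "example-vision-models"          # assumed S3 bucket
MODEL_KEY = "anomaly-detector/v2/model.tar.gz"  # assumed object key
FLEET_TOPIC = "fleet/vision/model-updates"      # assumed MQTT topic


def publish_model_update(local_model_path: str) -> None:
    """Upload a trained model artifact and tell edge devices to fetch it."""
    # Store the newly trained model in the cloud.
    s3 = boto3.client("s3")
    s3.upload_file(local_model_path, MODEL_BUCKET, MODEL_KEY)

    # Notify the fleet over MQTT where the new model version lives.
    iot = boto3.client("iot-data")
    iot.publish(
        topic=FLEET_TOPIC,
        qos=1,
        payload=json.dumps({"bucket": MODEL_BUCKET, "key": MODEL_KEY}).encode("utf-8"),
    )


if __name__ == "__main__":
    publish_model_update("model.tar.gz")
```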

Companies may not have the data scientists or the money to build a model themselves. For Ashe, lowering the barrier to entry means making it possible to take ten to twelve images of an anomaly and upload them to the cloud. “Immediately, you get back an anomaly detection model that’s detecting that exact anomaly. Then, you iterate on that model, cloud to edge.”

At this year’s embedded world, Basler and AWS explained how they bridge the gap between edge and cloud through a collaboration covering AWS services “AWS Panorama” and “Amazon Lookout for Vision”. AWS Panorama is a machine learning (ML) appliance and SDK which gives customers the ability to make real-time decisions to improve operations, automate monitoring of visual inspection tasks, find bottlenecks in industrial processes, and assess worker safety within facilities. Amazon Lookout for Vision is a ML service that spots defects and anomalies in visual representations using computer vision.
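For the anomaly-detection flow described above, querying a trained Amazon Lookout for Vision model from code might look roughly like the boto3 sketch below. The project name, model version, and image path are assumptions; the project would already need to be trained and its model version hosted.

```python
# Minimal sketch of querying an Amazon Lookout for Vision model with boto3.
# Project name, model version, and image path are illustrative assumptions;
# the project must already be trained and the model version started (hosted).
import boto3

PROJECT_NAME = "camera-defect-demo"  # assumed project name
MODEL_VERSION = "1"                  # assumed model version


def check_image(image_path: str) -> None:
    """Send one image to the hosted model and print the anomaly verdict."""
    client = boto3.client("lookoutvision")
    with open(image_path, "rb") as image_file:
        response = client.detect_anomalies(
            ProjectName=PROJECT_NAME,
            ModelVersion=MODEL_VERSION,
            Body=image_file.read(),
            ContentType="image/jpeg",
        )
    result = response["DetectAnomalyResult"]
    print("anomalous:", result["IsAnomalous"], "confidence:", result["Confidence"])


if __name__ == "__main__":
    check_image("sample.jpg")
```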

When asked whether embedded vision can solve time-critical tasks in the cloud, Ashe said there is going to be more and more usage of the edge as applications need to be moved closer to the user and to the experience. “Wherever there are latency requirements, edge is going to be the number one priority, but as you consider some of the high-speed networks that are coming online, especially things around 5G, that creates a whole new opportunity for cloud and edge to have closer interoperability and more edge-to-cloud use cases delivered.”

Complexity, size, cost

Looking ahead to the next few years, panelists listed areas for improvement to enable wider adoption of embedded vision systems.

Complexity: “With the old PC system, you bought your camera, you bought your hardware, you had one processor and the software was running on the processor,” said Arndt Bake, CMO of Basler. Today, however, “the processing is not one processor. You have a CPU, a GPU, special hardware for AI, maybe an ISP in the SoC. So, instead of one, you have four hardware resources, and you need to map the software to these four resources.” Systems are getting more complex, and customers are struggling with the ever-growing complexity. To foster adoption, usefulness has to be demonstrated and usability must be addressed. Some companies are trying to bring the pieces together and make it easier for customers, because “the easier it’s going to get, the higher the adoption rate and the wider the usage of that technology,” said Bake.

Size: Have we come to a steady state in terms of size? No, replied Bake. “It’s going to get smaller. If you open up your smartphone and look at the processing and camera functionality, you can see how small things can get. The smartphone is going to be our benchmark.”

Cost: From a general perspective, “it’s all about money,” said Munkelt. Today, some applications are not justified because prices are too high. If the cost comes down, new possibilities will arise.

With increased ease of use, lower prices, and smaller devices that fit into existing machinery, embedded vision will become more accessible to smaller companies that haven’t used the technology before, concluded Nilsson.
