If you are looking for an interesting machine vision company to keep your eye on (pun intended), look no further than Algolux. With approximately 40 employees, Algolux is not a big company, but the work they are doing could be game-changing for the machine vision industry.

The first time I heard about Algolux was circa 2016 in regard to their Atlas Camera Tuning Suite (previously named CRISP-ML) — an artificial intelligence (AI) machine learning (ML) platform used to automate the complex optimization of vision systems and imaging pipelines, including the optics (lenses), sensors, processors, and image/video processing algorithms.

Next came Eos Perception Software, an end-to-end neural network stack that can be embedded in any vision system to deliver improvements in perception accuracy of more than 30% as compared to today’s most accurate computer vision algorithms, especially in the harshest scenarios.

Speaking of harsh usage scenarios, many demonstrations of computer vision systems take place under optimal conditions, but the folks at Algolux also target their systems at dusty, dirty, and foggy environments. As they say, if all you have is a camera whose lens is smeared with gunge and grime, then you have to make the best of what you’ve got.

More recently, Algolux announced its Ion Platform — an autonomous vision system design and implementation platform that enables creators of next-generation products to design their vision systems from end-to-end.

As part of all this, Algolux has also been winning awards right, left, and center, including the prestigious Vision Product of the Year Award at the 2018 Embedded Vision Summit.

But none of the above was what I wanted to talk about here (sorry).

Algolux’s mission statement reads as follows:

Our mission is to enable autonomous vision — empowering cameras to see more clearly and perceive what cannot be sensed with today’s imaging and vision systems.

Well, their latest announcement about enabling vehicle and smartphone cameras to see around corners using non-line-of-sight (NLOS) technology certainly falls in line with their stated mission.

It seems that a team of researchers from Algolux, the University of Montreal, and Princeton University has developed a new method that lets conventional color cameras, like the ones in your smartphone or in vehicles, see hidden objects that are occluded by walls or other objects in the scene.

The team has achieved unprecedented resolution for NLOS imaging by being able to see objects around corners in high resolution and color for the first time. The researchers from academia and industry were able to reconstruct high-quality images of traffic signs and other 3D objects without looking directly at those objects.


Example of applying high-resolution non-line-of-sight imaging for a real-world driving application. (Source: Algolux)

The idea, in a nutshell, is to detect, isolate, and identify the ghost images of hidden objects reflected off of other objects that are visible. Now, I must admit that when I first saw the example image shown above, my knee-jerk reaction was “Why would I be interested in seeing a stop sign hidden behind a wall?” On reflection (again, no pun intended), however, I began to envisage all sorts of interesting possibilities.
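To make the "ghost image" idea concrete, here is a toy geometric sketch of my own (this is emphatically not Algolux's actual method, and all the names and numbers are made up for illustration): for an ideal mirror-like reflector, light from a hidden object reaches the camera as if it came from the object's mirror image across the reflecting surface. The hard part the researchers tackle is that real-world surfaces are rough, so these reflections are smeared out rather than crisp, and the hidden scene must be computationally recovered from that blur.

```python
import numpy as np

def reflect_across_plane(point, plane_point, plane_normal):
    """Mirror-image a 3D point across a planar reflector.

    For an ideal (specular) reflector, the 'ghost' of a hidden
    object appears at the point's mirror image: the camera sees
    light as if it originated from this virtual position.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)            # unit normal of the reflector
    p = np.asarray(point, dtype=float)
    d = np.dot(p - np.asarray(plane_point, dtype=float), n)
    return p - 2.0 * d * n               # flip the point to the other side

# Toy scene: a hidden object is occluded from the camera, but a
# glossy surface (the reflector) lies in the plane y = 0 and is
# visible to the camera.
hidden_object = np.array([-1.0, 2.0, 0.0])
reflector_pt  = np.array([0.0, 0.0, 0.0])
reflector_n   = np.array([0.0, 1.0, 0.0])   # plane y = 0

virtual_image = reflect_across_plane(hidden_object, reflector_pt, reflector_n)
print(virtual_image)   # ghost appears at (-1, -2, 0), mirrored below the plane
```

A camera that can see the reflector but not the object still receives light from the virtual image position; the "detect, isolate, and identify" work is in untangling those faint, blurred ghosts from everything else in the scene.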

As a general rule of thumb, the more you know, the better off you are. The term “situational awareness” refers to the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status. Less formally, situational awareness means knowing what’s going on around you.

Take autonomous vehicles, for example: it cannot be denied that the more they are aware of what’s taking place around them, the better the chances of us all surviving to fight another day. Consider the following situation: suppose you are cruising down one arm of a Y-shaped intersection, poised to merge with the other arm, as illustrated below:


Example of applying high-resolution non-line-of-sight imaging for a real-world driving application. (Source: Max Maxfield)

Now suppose that your vehicle is equipped with NLOS imaging technology, so — even though there’s a hill between you and the other car — it can detect images reflected off objects like the truck, thereby providing it with a clue as to what’s going on and what to expect. Whichever way you look at it, this has got to be a good thing.

Similarly, consider robot buggies and forklift trucks transporting things around warehouses; being equipped with NLOS imaging technology would greatly improve their situational awareness with regard to other robots, humans, and objects skulking behind corners.

Of course, there’s always the consideration that it may not be a good idea to equip AI-powered robots with superhuman capabilities like the ability to see around corners — see “One Metallic Step Closer to the Robot Apocalypse” — but hopefully such a scenario is not lurking in our future.

I’ve only just started thinking about NLOS imaging — and the work being done by Algolux et al. is still in an early stage of development — but ideas are bouncing around my brain like firecrackers. How about you? Can you think of any interesting applications for this technology?