Inuitive licenses CEVA-XM4 vision DSP for its NU4000 AR/VR and computer vision SoC
CEVA has revealed that Inuitive, a developer of advanced depth sensing, computer vision and image processing SoCs, has licensed and deployed the CEVA-XM4 intelligent vision DSP in its AR/VR and computer vision SoC, the NU4000.
Inuitive will leverage the CEVA-XM4 to run complex, real-time depth sensing, feature tracking, object recognition, deep learning and other vision-related algorithms targeting a range of mobile devices, including augmented and virtual reality headsets, drones, consumer robots, 360-degree cameras and depth sensors. In addition, developers and OEMs will be able to leverage the open, programmable nature of the CEVA-XM4 in the Inuitive SoC to add their own differentiating features and algorithms via software, including their own neural networks which can be implemented via the CEVA deep neural network (CDNN) framework.
The NU4000 SoC builds on the success of Inuitive’s NU3000 multi-core image processor that used the third-generation CEVA-MM3101 imaging and vision DSP for stereoscopic vision. NU3000 serves as part of the Google Project Tango ecosystem, where developers can use it to power applications requiring real-time depth generation, mapping, localisation, navigation and other complex signal processing algorithms.
CEVA’s imaging and vision DSPs target the extreme processing requirements of the most sophisticated computational photography and computer vision applications, such as video analytics, augmented reality and advanced driver assistance systems (ADAS). By offloading these performance-intensive tasks from the CPUs and GPUs, the highly efficient DSP dramatically reduces the power consumption of the overall system while providing complete flexibility. The platform includes a vector processor developed specifically to deal with the complexities of such applications, along with an extensive application development kit (ADK) that provides an easy-to-use development environment.
The CEVA ADK includes an Android Multimedia Framework (AMF) that streamlines software development and integration, a set of advanced software development tools, and a range of software products and libraries optimised for the DSP. For embedded systems targeting deep learning, the CDNN real-time neural network software framework streamlines machine learning deployment at a fraction of the power consumption of leading GPU-based systems.