STMicroelectronics, Leopard Imaging introduce integrated perception module for edge AI robotics

The module integrates with NVIDIA Corp. Jetson edge AI computing platforms and the NVIDIA Isaac robotics development framework, enabling real-time sensor data ingestion, simulation, and AI-driven perception.
March 27, 2026

GENEVA - STMicroelectronics N.V. in Geneva and Leopard Imaging Inc. in Fremont, Calif., have introduced an all-in-one multimodal vision module for humanoid and advanced robotics systems. It integrates imaging, 3D scene mapping, and motion sensing to simplify robotic perception in size-, weight-, and power-constrained (SWaP) platforms.

Powered by NVIDIA’s Holoscan Sensor Bridge, the module connects to Jetson platforms over Ethernet for high-throughput sensor data transfer and supports development within the Isaac framework, which includes application programming interfaces, AI algorithms, and simulation tools for robotics autonomy.

Multiple sensor tech

The module combines multiple sensing technologies from STMicroelectronics. Vision sensing is based on the VB1940 automotive-grade RGB-IR 5.1-megapixel image sensor, which supports both rolling shutter and global shutter modes, enabling operation in dynamic lighting and motion conditions. The company also introduced the V943 industrial sensor from its BrightSense family, available in monochrome or RGB-IR configurations.

Motion sensing is provided by the LSM6DSV16X six-axis inertial measurement unit, which integrates an embedded machine-learning core for edge processing, sensor fusion, and low-power operation, along with Qvar electrostatic sensing for user-interface detection.

For depth perception, the module incorporates the VL53L9CX direct time-of-flight LiDAR sensor from the FlightSense family. The device provides depth ranging up to 9 meters and supports a 54-by-42 zone resolution with a 55-degree-by-42-degree field of view, enabling detection of small objects and both short- and long-range measurements at frame rates up to 100 frames per second.
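Taken together, the published figures imply roughly one-degree angular resolution per zone. A quick back-of-the-envelope calculation (using only the numbers quoted above, not any vendor API) shows what that means for spatial resolution at the sensor's maximum range:

```python
import math

# Figures quoted for the VL53L9CX: 54 x 42 zones, 55-by-42-degree
# field of view, ranging up to 9 meters.
ZONES_H, ZONES_V = 54, 42
FOV_H_DEG, FOV_V_DEG = 55.0, 42.0
MAX_RANGE_M = 9.0

# Approximate angular resolution per zone.
res_h = FOV_H_DEG / ZONES_H  # ~1.02 degrees per zone horizontally
res_v = FOV_V_DEG / ZONES_V  # 1.00 degree per zone vertically

# Lateral footprint of a single zone at maximum range.
zone_width_m = 2 * MAX_RANGE_M * math.tan(math.radians(res_h / 2))

print(f"{res_h:.2f} deg/zone H, {res_v:.2f} deg/zone V")
print(f"~{zone_width_m * 100:.0f} cm zone width at {MAX_RANGE_M:.0f} m")
```

At 9 meters, each zone therefore covers a patch on the order of 16 centimeters wide, which is consistent with the claimed ability to detect small objects at both short and long range.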

By combining RGB-IR imaging, inertial sensing, and time-of-flight ranging in a single module, the system enables sensor fusion for real-time environment mapping, object detection, and motion tracking, which are critical for humanoid robots operating in unstructured environments.
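One basic step in that kind of fusion is projecting each ToF zone's range reading into a 3D point and transforming it by the robot's orientation, such as a heading estimate from the IMU's fusion core. The sketch below is purely illustrative, built only from the geometry quoted in this article; the function names are hypothetical and no vendor driver or Isaac API is assumed:

```python
import math

# Hypothetical sketch: map a ToF zone index and its range reading to a 3D
# point using the 54 x 42 zone grid and 55-by-42-degree field of view
# described above, then rotate it into the robot frame by a yaw estimate.

def zone_to_point(col, row, range_m, zones=(54, 42), fov_deg=(55.0, 42.0)):
    """Return (x, y, z) in the sensor frame, x pointing forward."""
    az = math.radians((col + 0.5) / zones[0] * fov_deg[0] - fov_deg[0] / 2)
    el = math.radians((row + 0.5) / zones[1] * fov_deg[1] - fov_deg[1] / 2)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

def rotate_yaw(point, yaw_rad):
    """Rotate a point about the vertical axis by the robot's yaw."""
    x, y, z = point
    return (x * math.cos(yaw_rad) - y * math.sin(yaw_rad),
            x * math.sin(yaw_rad) + y * math.cos(yaw_rad),
            z)

# A near-center zone reading 2 m ahead, with the robot yawed 90 degrees:
# the obstacle ends up to the robot's side in the world frame.
p = zone_to_point(26, 20, 2.0)
print(rotate_yaw(p, math.pi / 2))
```

Repeating this over all 2,268 zones per frame, at up to 100 frames per second, yields the point stream that mapping and obstacle-detection pipelines such as those in the Isaac framework would consume.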

For more information, please visit https://www.st.com.