Sensor Fusion in Outdoor Applications
10.07.2025 - AI-Driven Collision Warning System for Mobile Machinery
Mobile machines operate in complex, unstructured, and cluttered environments. Thanks to the fusion of two measurement principles and AI specifically trained for working environments, a new assistance system analyses the environment with great precision and delivers reliable hazard warnings.
IFM is expanding into the assistance systems segment with a new solution: O3M AI. It consists of a PMD camera system that combines two technologies, time-of-flight measurement and 2D vision. Thanks to this fusion, the collision warning system can more accurately determine obstacle types and distances. Much like adjusting the contrast and brightness on a TV sharpens the image, fusing two data sources enhances a mobile machine's perception of its surroundings, allowing it to recognise people and objects with greater accuracy.
3D and 2D Video in One System
The 3D PMD sensor for distance measurement and the 2D Ethernet camera for person detection are both integrated into a single embedded system. A high-performance processor manages both signals and intelligently correlates them. Simultaneously, a neural network analyses the live image from the high-resolution 2D Ethernet camera. To accomplish this, the camera is equipped with its own powerful processor and an AI accelerator (NPU). This enables fast processing of the AI algorithms for person detection directly within the system, enhancing both person and object recognition. Depending on the application, depth mapping from the 3D PMD sensor and the video from the Ethernet camera can be used either separately or in combination. The system also features extraneous light suppression, withstands extreme temperatures, and is protected against water and dust.
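To illustrate how correlating the two signals can work in principle, here is a minimal sketch: a 2D detector supplies a bounding box for a person, and the aligned depth map from the 3D sensor supplies the distance. The function names, data layout, and the assumption that both sensors share one pixel grid are illustrative, not the actual O3M AI interfaces.

```python
# Minimal 2D/3D fusion sketch (illustrative, not the O3M AI implementation):
# take a bounding box from the 2D person detector and look up the closest
# valid distance inside that box in the registered 3D depth map.

def distance_to_detection(depth_map, box):
    """Return the closest valid depth value inside a 2D bounding box.

    depth_map: 2D list of distances in metres (0.0 = no valid measurement)
    box: (row_min, row_max, col_min, col_max), half-open ranges
    """
    r0, r1, c0, c1 = box
    values = [
        depth_map[r][c]
        for r in range(r0, r1)
        for c in range(c0, c1)
        if depth_map[r][c] > 0.0
    ]
    return min(values) if values else None

# Example: a small depth map with a person detected in the right half
depth = [
    [12.0, 12.0, 3.2, 3.1],
    [12.0, 12.0, 3.0, 0.0],   # 0.0 marks an invalid pixel
]
person_box = (0, 2, 2, 4)     # rows 0-1, columns 2-3
print(distance_to_detection(depth, person_box))  # -> 3.0
```

Taking the minimum valid depth inside the box is a conservative choice: the warning is based on the nearest part of the detected person, not an average that could be skewed by background pixels.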
IFM’s 3D PMD sensor uses the Time-of-Flight (ToF) method. An infrared light source installed on the vehicle emits modulated, invisible IR light. The sensor detects the reflected light and measures the distance to the object for each pixel based on the phase shift between the transmitted and received signals.
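For continuous-wave ToF, the per-pixel relationship between phase shift and distance is standard: the light travels to the object and back, so d = c · Δφ / (4π · f_mod), and a full 2π phase wrap limits the unambiguous range to c / (2 · f_mod). A short sketch, using an illustrative modulation frequency (the article does not state the sensor's actual value):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad, f_mod_hz):
    """Distance from the phase shift of a continuous-wave ToF measurement.

    The reflected light travels to the object and back, hence
    d = c * delta_phi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def ambiguity_range(f_mod_hz):
    """Maximum unambiguous distance: a full 2*pi phase wrap equals c / (2*f)."""
    return C / (2 * f_mod_hz)

# Illustrative frequency: 5 MHz gives ~30 m of unambiguous range,
# in line with the 25 m working range mentioned in the article.
f_mod = 5e6
print(round(tof_distance(math.pi / 2, f_mod), 2))  # -> 7.49 (quarter wrap)
print(round(ambiguity_range(f_mod), 1))            # -> 30.0
```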
Distinguishing People from Objects
However, to ensure the efficient operation of mobile machinery in complex environments, assistance systems must reliably distinguish between people and objects, and be capable of drawing smarter conclusions than conventional technologies currently allow. This capability is crucial for tasks such as dynamically adjusting speed while reversing or maintaining uninterrupted 360° situational awareness in cluttered environments with blind spots. Ultimately, the goal is to reliably prevent accidents involving personal injury. Yet many systems available today still deliver unsatisfactory results: false alerts and emergency braking are triggered too frequently, a consequence of safety-first logic. This reduces productivity and impairs efficiency. For drivers, false alerts mean lost time and added stress. After all, not every bump or dirt pile should trigger an emergency stop.
To enable smarter and more situation-aware driving, IFM leverages the high-precision distance data provided by the 3D sensor. By analysing over 1,000 distance values, the system can calculate braking distances and collision probabilities with high accuracy. Smart features, such as the ability to deactivate 3D object checking, offer additional flexibility: for example, when a vehicle is reversing over a long distance in clear, open surroundings, speed can be increased accordingly. In confined spaces, such as when forklift trucks or municipal vehicles are manoeuvring, O3M AI ensures reliable stopping, but only when it is truly necessary and the detected obstacle is indeed a person and not just a pile of dirt. Depending on the situation, O3M AI can determine whether emergency braking, a controlled stop, or deceleration is the appropriate response.
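A graded response of this kind can be sketched from basic kinematics: stopping distance is reaction distance plus v²/(2a), and the measured obstacle distance is compared against it. The thresholds, deceleration value, and decision rules below are illustrative assumptions; the article does not disclose O3M AI's actual decision logic.

```python
# Hedged sketch of a graded braking decision (illustrative only, not the
# actual O3M AI logic): compare measured obstacle distance against the
# kinematic stopping distance and escalate accordingly.

def braking_distance(speed_ms, decel_ms2, reaction_s=0.5):
    """Stopping distance = reaction distance + v^2 / (2 * a)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

def response(obstacle_distance_m, speed_ms, is_person, decel_ms2=3.0):
    """Pick a graded response instead of always emergency-braking."""
    d_stop = braking_distance(speed_ms, decel_ms2)
    if not is_person and obstacle_distance_m > d_stop:
        return "warn"             # e.g. a pile of dirt far enough away
    if obstacle_distance_m <= d_stop:
        return "emergency_brake"  # obstacle inside the stopping distance
    if obstacle_distance_m <= 2 * d_stop:
        return "controlled_stop"
    return "decelerate"

# A person detected 20 m behind a vehicle reversing at 3 m/s (~11 km/h):
# the stopping distance is only 3 m, so gentle deceleration suffices.
print(response(20.0, 3.0, is_person=True))  # -> decelerate
```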
Trained on Reality
In addition, IFM has developed its own AI-supported person detection system. Instead of relying on off-the-shelf AI, IFM has trained its neural network using proprietary data collected from real-world working environments, significantly improving algorithm accuracy. Conventional AI solutions often depend on sample images that bear little relevance to local work settings. By contrast, O3M AI, powered by proprietary image data, can reliably detect people and objects as they appear in typical working environments, whether it is a person lying on the ground, wearing dark clothing, partially obscured by large tools, or adopting an unusual posture. The system operates effectively in both bright sunlight and twilight, and has a range of up to 25 metres and an accuracy of 10 centimetres. It can assess up to 20 object types simultaneously.
The results of the data analysis are transmitted to the machine control system via CAN bus or Ethernet and signalled to the driver. Information and warning messages are displayed on a screen inside the vehicle. The sensor system overlays warning symbols, icons, line objects, and text onto the video image, seamlessly integrating them with the video signal. The digital video output supports the most common codecs, including H.264, H.265 and MJPEG via Fast Ethernet. Two independent video streams can be configured and used simultaneously.
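To make the CAN-bus path concrete, here is one way an analysis result could be packed into a single 8-byte CAN frame. The field layout, scaling, and value tables are assumptions made for illustration; the actual O3M AI CAN protocol is defined by IFM's interface documentation.

```python
import struct

# Hypothetical CAN payload layout (NOT IFM's actual protocol):
# byte 0: object class, bytes 1-2: distance in cm (little-endian,
# matching the 10 cm accuracy), byte 3: warning level, bytes 4-7: reserved.

OBJECT_CLASSES = {"none": 0, "object": 1, "person": 2}
WARNING_LEVELS = {"info": 0, "warn": 1, "stop": 2, "emergency": 3}

def pack_warning(obj_class, distance_m, warning_level):
    """Pack one detection result into an 8-byte CAN data field."""
    distance_cm = int(round(distance_m * 100))
    return struct.pack(
        "<BHB4x",                       # u8, u16, u8, 4 padding bytes
        OBJECT_CLASSES[obj_class],
        distance_cm,
        WARNING_LEVELS[warning_level],
    )

payload = pack_warning("person", 4.3, "warn")
print(payload.hex())  # -> 02ae010100000000  (person at 430 cm, level warn)
```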
The system is configured using the IFM Vision Assistant software. Even complex setups involving multiple 3D sensor systems can be configured easily, without requiring specialised expertise. The 2D/3D smart camera system simplifies work in off-road environments and enhances safety.