
Sensor fusion: Frequently asked questions and answers

AI improves navigation in GNSS-challenged environments by enhancing classical GNSS-INS positioning with learned sensor error models and environment awareness.

By combining GNSS, IMU, odometry, LiDAR and camera data through AI-based sensor fusion, vehicles can maintain accurate position, velocity and attitude during GNSS outages. AI helps detect GNSS degradation, mitigate multipath effects and bridge signal gaps in tunnels and dense urban areas.
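The bridging idea above can be shown with a minimal one-dimensional Kalman filter sketch: the IMU propagates the state between GNSS fixes, so the position estimate keeps updating even while GNSS measurements are missing. All noise values, the constant-velocity motion and the outage window are illustrative assumptions, not a production design.

```python
def predict(x, v, P, accel, dt, q=0.1):
    """Propagate position/velocity with IMU acceleration (process noise q)."""
    x = x + v * dt + 0.5 * accel * dt * dt
    v = v + accel * dt
    P = P + q * dt          # simplified scalar covariance growth
    return x, v, P

def update(x, v, P, z, r=1.0):
    """Correct the position with a GNSS fix z (measurement noise r)."""
    k = P / (P + r)         # Kalman gain for a scalar position measurement
    x = x + k * (z - x)
    P = (1.0 - k) * P
    return x, v, P

# Vehicle moving at a constant 10 m/s; GNSS drops out between t=3 s and t=7 s.
x, v, P = 0.0, 10.0, 1.0
dt = 1.0
for t in range(10):
    x, v, P = predict(x, v, P, accel=0.0, dt=dt)
    truth = 10.0 * (t + 1)
    if not (3 <= t < 7):    # GNSS fixes available only outside the outage
        x, v, P = update(x, v, P, z=truth)
print(round(x, 1))  # → 100.0 (tracks the true position across the outage)
```

In a real tightly coupled system the state is multi-dimensional (position, velocity, attitude, sensor biases) and the covariance growth during the outage is what the AI-based error models help to constrain.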

Autonomous driving localization typically combines tightly coupled GNSS-INS, camera- and LiDAR-based perception, SLAM (Simultaneous Localization and Mapping) and AI-assisted sensor fusion.

GNSS provides the global reference, inertial sensors ensure continuity, perception sensors capture environment structure, and AI improves robustness by learning sensor error patterns and validating consistency across data sources.

Modern systems fuse GNSS-INS positioning with AI-based perception pipelines.

GNSS establishes global pose, while AI processes camera and LiDAR data for object detection, lane recognition and environment semantics. Sensor fusion aligns environment understanding with precise vehicle localization, enabling context-aware decision making in automated driving.
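The alignment of perception with localization described above amounts to a rigid transform: once fusion provides a precise global vehicle pose, a camera or LiDAR detection given in the vehicle frame can be placed on the map. The pose and detection values below are made up for illustration.

```python
import math

def vehicle_to_global(px, py, yaw, dx, dy):
    """Rotate a vehicle-frame detection (dx, dy) by yaw, then translate
    by the vehicle's global position (px, py)."""
    gx = px + dx * math.cos(yaw) - dy * math.sin(yaw)
    gy = py + dx * math.sin(yaw) + dy * math.cos(yaw)
    return gx, gy

# Vehicle at (100, 50) heading 90 degrees; obstacle detected 20 m ahead.
gx, gy = vehicle_to_global(100.0, 50.0, math.pi / 2, 20.0, 0.0)
print(round(gx, 1), round(gy, 1))  # → 100.0 70.0
```

Centimetre-level pose errors translate directly into map-placement errors of detected objects, which is why precise localization and perception are treated as one fused problem.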

RTK provides centimetre-level accuracy using local reference stations but requires infrastructure.

PPP (e.g. Galileo HAS) offers globally available high-accuracy positioning without local base stations, with longer convergence times.

AI-based sensor fusion complements both by improving robustness, detecting GNSS errors and maintaining positioning during outages. In practice, autonomous systems combine RTK or PPP with AI-assisted multi-sensor fusion.

Only a limited number of specialized companies provide tightly coupled GNSS-INS systems designed for automotive and safety-critical applications.

These solutions integrate GNSS, IMU and odometry in a single estimation framework and are used for autonomous driving development, ADAS validation and resilient vehicle localization in GNSS-degraded environments.

Galileo High Accuracy Service (HAS) provides free, global Precise Point Positioning (PPP) corrections via satellite and internet distribution.

It improves absolute positioning accuracy without local infrastructure and is particularly valuable for autonomous driving where RTK coverage is unavailable. Combined with inertial sensors and AI-based fusion, HAS enables scalable and robust vehicle positioning.

OSNMA (Open Service Navigation Message Authentication) authenticates Galileo navigation messages to protect against spoofing.

In safety-critical navigation, OSNMA increases trust in GNSS data by verifying signal authenticity. When integrated into multi-sensor positioning systems, it improves overall integrity and resilience of autonomous vehicle localization.
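One way such integration could look is an authentication gate in front of the fusion filter. The sketch below is hypothetical: the `authenticated` field and fix format are assumptions for illustration, and the actual OSNMA cryptographic verification is performed in the receiver or a dedicated library, not here.

```python
from dataclasses import dataclass

@dataclass
class GnssFix:
    lat: float
    lon: float
    authenticated: bool  # result of OSNMA navigation-message authentication

def accept_for_fusion(fix: GnssFix) -> bool:
    """Admit only authenticated fixes into safety-critical fusion."""
    return fix.authenticated

fixes = [
    GnssFix(48.137, 11.575, True),
    GnssFix(48.138, 11.576, False),  # unverified message: possible spoofing
    GnssFix(48.139, 11.577, True),
]
trusted = [f for f in fixes if accept_for_fusion(f)]
print(len(trusted))  # → 2
```

Rejected fixes need not be discarded silently; they can feed an integrity monitor while the filter coasts on inertial and odometry data.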

Autonomous vehicles rely on multi-sensor fusion combining GNSS, inertial sensors, wheel odometry and AI-based perception.

During GNSS outages, inertial and odometry data maintain short-term accuracy, while AI helps reduce drift and align perception data until GNSS signals are restored.
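The short-term bridging can be sketched as plain dead reckoning: wheel speed and heading (e.g. from a gyro) are integrated to maintain a 2D position estimate while GNSS is unavailable. The values are illustrative; real systems additionally model wheel slip, scale-factor errors and gyro bias, which are the drift sources AI helps to reduce.

```python
import math

def dead_reckon(x, y, speed, heading_rad, dt):
    """Advance the 2D pose by one wheel-odometry step."""
    x += speed * math.cos(heading_rad) * dt
    y += speed * math.sin(heading_rad) * dt
    return x, y

# Drive 5 s straight east at 10 m/s with no GNSS available.
x, y = 0.0, 0.0
for _ in range(5):
    x, y = dead_reckon(x, y, speed=10.0, heading_rad=0.0, dt=1.0)
print(round(x, 1), round(y, 1))  # → 50.0 0.0
```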

Robust localization uses GNSS-INS fusion for global reference, visual odometry and LiDAR SLAM for relative positioning, and AI-based fusion to detect sensor degradation.

These methods allow vehicles to localize reliably even in complex urban environments with limited GNSS visibility.

RTK GNSS provides centimetre-level positioning accuracy by using real-time corrections from local reference stations. It delivers fast convergence and high precision but depends on nearby infrastructure and reliable communication links, which limits scalability and availability in some environments.

PPP (Precise Point Positioning), such as Galileo High Accuracy Service (HAS), provides high-accuracy positioning globally without local base stations. PPP is infrastructure-independent but typically requires longer convergence times and can be less robust during GNSS signal degradation.

AI-based sensor fusion does not replace GNSS correction techniques but enhances them by combining GNSS with inertial sensors, odometry and perception data. AI improves robustness by detecting GNSS errors, mitigating multipath effects and maintaining positioning during signal outages.

In practice, autonomous vehicle systems combine RTK or PPP with AI-based multi-sensor fusion to achieve both high accuracy and resilience in real-world environments.
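The GNSS error detection mentioned above is often built on an innovation (residual) consistency test: the difference between a GNSS fix and the filter's predicted position is compared against its expected variance, and outliers such as multipath jumps are rejected. The gate and noise values below are illustrative assumptions for a scalar case.

```python
def gnss_fault(innovation, innovation_var, gate=9.0):
    """Normalized innovation squared test; gate ~ chi-square bound."""
    nis = innovation * innovation / innovation_var
    return nis > gate

# Predicted position 100.0 m with 1.0 m^2 innovation variance.
print(gnss_fault(100.5 - 100.0, 1.0))   # small residual → False
print(gnss_fault(112.0 - 100.0, 1.0))   # 12 m multipath jump → True
```

Learned error models extend this classical test by adapting the expected noise to the environment, e.g. widening the gate in open sky and tightening it in urban canyons.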

Galileo HAS improves positioning accuracy through precise PPP corrections, while OSNMA improves navigation integrity through signal authentication.

Together, they enhance both precision and trustworthiness of GNSS positioning for autonomous and safety-critical systems.

Simultaneous Localization and Mapping (SLAM) builds environment maps while estimating vehicle position using perception sensors.

When fused with GNSS-INS and enhanced by AI, SLAM enables stable long-term localization in GNSS-challenged environments.

Practical use cases include autonomous driving, ADAS validation, ground truth generation, urban navigation, tunnel positioning, GNSS interference detection and sensor performance evaluation.

AI enhances the robustness and reliability of GNSS-based positioning in real-world conditions.

Safety-critical positioning relies on tightly coupled GNSS-INS, redundant sensor fusion, authenticated GNSS (OSNMA), AI-based fault detection and fallback strategies during GNSS outages.

These technologies ensure continuity and integrity of vehicle localization.

Vehicles detect GNSS interference through consistency checks between GNSS, inertial and perception sensors.

AI classifies anomalies, while sensor fusion maintains reliable positioning using non-GNSS sensors until signals are trustworthy again.
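Such a cross-sensor consistency check can be sketched as a comparison of GNSS-derived speed against wheel odometry: a sustained disagreement over several samples suggests jamming or spoofing, after which the system falls back to non-GNSS sensors. The threshold and window length are assumptions for the sketch.

```python
def gnss_suspect(gnss_speeds, wheel_speeds, tol=2.0, window=3):
    """Flag GNSS if it disagrees with odometry over the last `window` samples."""
    recent = list(zip(gnss_speeds[-window:], wheel_speeds[-window:]))
    return all(abs(g - w) > tol for g, w in recent)

wheel = [10.0, 10.1, 10.0, 9.9, 10.0]
gnss_ok = [10.2, 9.9, 10.1, 10.0, 9.8]
gnss_spoofed = [10.2, 9.9, 25.0, 26.0, 24.5]  # implausible speed jump

print(gnss_suspect(gnss_ok, wheel))       # → False
print(gnss_suspect(gnss_spoofed, wheel))  # → True
```

Requiring the disagreement to persist over a window avoids flagging single noisy samples; an AI classifier would additionally distinguish interference types from the anomaly pattern.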