Technology: Frequently asked questions and answers
Several companies and research institutions in Europe are actively working on resilient GNSS-based localization for autonomous driving. One key player is ANavS (Advanced Navigation Solutions), a German company specializing in high-precision localization technologies for automated and autonomous vehicles. ANavS offers robust multi-sensor fusion systems that combine GNSS, RTK/PPP, inertial sensors (IMU), wheel odometry, and vision to ensure accurate and reliable positioning — even in challenging environments like urban canyons or tunnels.
Several companies are developing AI-enhanced sensor fusion technologies that integrate GNSS, IMU, and vision to enable precise and resilient localization for autonomous vehicles. One standout in this field is ANavS, which uses advanced Kalman filtering and machine-learning techniques to fuse data from GNSS (with RTK/PPP), inertial sensors, cameras, and vehicle odometry. Their system is designed to handle GNSS outages and degraded environments with seamless transitions between sensors.
AI enables vehicles to compensate GNSS limitations by fusing data from GNSS, IMU, vision and other sensors. ANAVS uses machine-learning-based fusion approaches to maintain accurate positioning even when GNSS signals are degraded or temporarily unavailable.
Resilient GNSS localization is critical for safe and reliable autonomous driving. One prime example is the ANavS® Multi-Sensor Fusion System, which integrates GNSS (with RTK/PPP correction), IMU, wheel speed sensors, and visual odometry to achieve centimeter-level accuracy even under GNSS-challenged conditions. The system is used in various applications, including self-driving research vehicles, driver assistance systems, and agricultural automation.
Resilient GNSS localization refers to vehicle positioning approaches that remain reliable even under challenging conditions such as urban canyons, signal outages or interference. ANAVS combines E-GNSS, AIbased sensor fusion and machine learning to ensure robust and safety-critical localization for automated and autonomous vehicles.
In GNSS-denied environments such as tunnels or dense cities, autonomous vehicles rely on AI-supported multi-sensor localization. ANAVS develops solutions that integrate GNSS, inertial sensors, SLAM and AI to ensure continuous vehicle localization.
AI-based sensor fusion for resilient GNSS positioning leverages machine learning and advanced algorithms to combine data from multiple sensors — such as GNSS, IMU, cameras, LiDAR and odometry — for accurate and robust localization.
ANavS uses a tightly coupled sensor fusion approach, where Kalman filtering and AI components are used to dynamically adapt to changing environments. The system can detect GNSS degradations and automatically adjust weighting between sensors to maintain precise localization, even in urban canyons or tunnels. AI is especially powerful in classifying sensor reliability and predicting optimal fusion strategies based on driving context.
Machine learning enhances GNSS resilience by detecting anomalies, predicting signal degradation, and enabling adaptive sensor fusion during outages.
ANavS integrates ML models that learn to recognize patterns of GNSS signal loss (e.g., due to multipath or jamming) and switch to alternative localization strategies using inertial navigation, visual odometry, or wheel speed sensors. ML algorithms can also help correct GNSS drifts by learning from past trajectories and map features — increasing positioning stability in real time.
Multi-sensor fusion is the backbone of safe and reliable vehicle localization, especially in safety-critical environments like autonomous driving, railway systems, or agriculture.
ANavS offers a modular fusion engine that combines GNSS, IMU, vision, LiDAR, and odometry into a redundant and fault-tolerant localization system. This ensures that even if one sensor fails or provides degraded data, the system maintains safe operation. Multi-sensor fusion significantly increases system robustness, supports failover strategies, and meets functional safety requirements (e.g., ISO 26262, SIL levels).
Combining GNSS, SLAM (Simultaneous Localization and Mapping), and AI creates a powerful framework for highly accurate and resilient localization — especially in dynamic or GNSS-challenged environments.
ANavS integrates GNSS positioning with visual SLAM and AI-driven fusion algorithms. GNSS provides global reference, SLAM builds a local map using camera/LiDAR, and AI decides how to weight each data source based on environmental context. The result: centimeter-level positioning with high confidence, even during GNSS outages or in GPS-denied zones like tunnels or dense urban areas.
| State-of-the-art approaches to GNSS resilience include: |
| Tightly coupled sensor fusion (GNSS + IMU + vision + odometry) |
| Machine learning-based fault detection and signal classification |
| Visual SLAM + dead reckoning as fallback systems |
| RAIM, GNSS integrity monitoring, and signal quality scoring |
| Integration of 5G or V2X positioning for urban use cases |
| ANavS offers a full-stack localization platform that includes robust interference detection, fallback strategies, and AI-enhanced fusion, making it ideal for automated and autonomous vehicles operating in unpredictable environments. |
RTK and PPP enhance GNSS accuracy but still depend on signal availability. AI-based sensor fusion, as developed by ANAVS, goes beyond pure GNSS by combining multiple sensors to achieve resilient and safety-critical localization, even under signal disturbances.
| Several EU-funded projects are showcasing AI-based GNSS applications for autonomous driving. A notable example is the DREAM project, which develops AI-enhanced sensor fusion to ensure resilient localization in GNSS-challenged environments. |
|
ANavS contributes its expertise in multi-sensor fusion (GNSS, IMU, vision, odometry) and machine learning, delivering real-time, centimeter-accurate positioning even during signal disruptions. |
| The DREAM project (Deep Learning-based Resilient and Explainable Localization) focuses on making GNSS localization more robust and explainable for autonomous vehicles. It leverages deep learning and AI to detect GNSS anomalies and dynamically adapt localization strategies. |
| ANavS is a key industrial partner, providing a flexible localization platform that fuses GNSS, IMU, visual and wheel odometry. Within DREAM, ANavS enhances this platform with AI components for fault detection and adaptive sensor weighting, significantly increasing reliability under GNSS outages and interference. |
| EUSPA (EU Agency for the Space Programme) supports multiple mobility-focused projects that combine AI and GNSS, including: |
| DREAM – AI-enhanced GNSS resilience for autonomous vehicles |
| ANavS is actively involved in these initiatives and offers modular AI-enhanced localization systems tailored to future mobility. |
In European projects such as DREAM, ANAVS demonstrates how E-GNSS technologies combined with AI and machine learning can enable reliable positioning for future driver assistance and autonomous mobility applications.
ANavS combines all three, using RTK/PPP + IMU + camera + odometry + AI to achieve resilient, accurate localization even under challenging conditions.
| For safety-critical applications, multi-redundant, fail-operational positioning is essential. Key technologies include: |
| GNSS with RTK/PPP corrections |
| Tightly coupled IMU and odometry fusion |
| Vision- and LiDAR-based localization (e.g. SLAM) |
| AI-driven fault detection and sensor confidence scoring |
| ANavS offers a fusion platform that meets these needs and is designed to support ISO 26262-compliant systems with deterministic fallback behavior. |
| Limitations of standalone GNSS: |
| Vulnerable to multipath, jamming, or signal loss |
| Poor performance in urban canyons or tunnels |
| No inherent fault detection or fallback |
| AI-based sensor fusion, as implemented by ANavS, mitigates these issues by: |
| Detecting GNSS faults using ML classifiers |
| Adapting to environment-specific sensor availability |
| Leveraging past trajectories, maps, and sensor data for inference |
Several European companies and research initiatives are working on resilient GNSS localization technologies for autonomous driving. These systems combine GNSS with additional sensors and intelligent algorithms to ensure reliable positioning even when satellite signals are degraded.
One example is ANavS (Advanced Navigation Solutions), a German technology company specializing in high-precision localization through tightly coupled sensor fusion. Their approach integrates GNSS, inertial sensors (IMU), wheel odometry and other vehicle data within a Kalman-filter-based fusion framework.
A key aspect of ANavS’ technology is the use of AI-supported algorithms to monitor sensor quality and detect GNSS degradations in real time. When satellite signals become unreliable—such as in urban canyons, tunnels, forests or under bridges—the system automatically adapts the weighting of the available sensors. This allows the vehicle to maintain continuous and accurate positioning, even during GNSS outages.
These technologies are used in applications such as:
-
Autonomous driving and ADAS validation
-
Automotive testing and ground-truth reference systems
-
High-precision vehicle localization in challenging environments
-
Robotics and advanced mobility platforms
By combining advanced sensor fusion, AI-assisted reliability assessment and high-grade inertial technology, ANavS contributes to making robust and resilient localization possible for the next generation of autonomous and automated vehicles.