Careers

ANavS® is hiring!

We are a team of experts in precise positioning solutions and precise mapping systems. Be part of a company with flat structures and freedom for innovative ideas. ANavS® is located in the attractive city of Munich, Germany. The nearest underground station is only 50 m from our office, and the closest highway entrance can be reached within only 2 minutes by car. From there, the city center as well as the Bavarian lakes, castles and the Alps are easily reached.

If our job offerings don’t quite meet your needs, we may be able to adapt the position for you. Just ask!

Nothing for you in the vacancy list below?
No problem, we are always looking for new team members: Master's thesis students, working students and full-time employees.
We look forward to your unsolicited application.


You can directly apply here.

Java Software Engineer (m/f/d)

Job advertisement Id FT001 online since 05/2022

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

You will be part of the agile hardware team and contribute to high-level software components of advanced positioning systems. We employ Java in most of our systems for management and maintenance functionality. We also use Java for automated testing and qualification of our sensor fusion code.

Your job will be to develop reliable, flexible and scalable code that keeps our products operational and fault tolerant in a wide range of operational scenarios. Your developments will typically be used in research for autonomous driving, maritime automation, autonomous robots, top-class sports and many more.

Your Core Tasks:

  • Development of management and configuration software for embedded positioning systems
  • Hardening of implementation against security and safety threats
  • Development of Continuous Integration and code quality tools
  • Adaptation of our systems to specific customer projects
  • Development of remote maintenance mechanisms
  • Development of IoT and cloud solutions
  • Close cooperation with the hardware, software and sensor fusion team
  • Patenting and publishing of developed approaches is encouraged

Core Qualifications:

These are required to get the job.

  • High motivation to become part of a great team and to gain new skills
  • Comprehensive experience in Java software development
  • Good communication skills in English and German
  • Good knowledge of Java 8+ APIs, Ant, Maven, multi-threaded software, networking, object orientation and software testing

Additional Welcome Qualifications:

These will make you stand out among the other candidates.

  • Experience with other common programming languages. For example: C/C++, Python, Matlab, Bash, JS/HTML5, TypeScript
  • Experience with safety critical programming
  • Theoretical understanding and practical knowledge of cryptography
  • Deep understanding of network technologies
  • Deep understanding of Linux-based operating systems
  • Experience with web technologies
  • Experience with embedded systems
  • Contributions to open source projects (You’re welcome to share links to your contributions / github account)

 

C/C++ Software Engineer (m/f/d)

Job advertisement Id FT010 online since 05/2022

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

You will be part of the agile hardware team and contribute to low-level software components of the systems. We employ C/C++ in most of our systems for firmware, drivers, and other real-time constrained components. We also use C/C++ for our sensor fusion code and some internal utilities. Your job will be to develop reliable, flexible and scalable code that keeps our products operational and fault tolerant in a wide range of operational scenarios.

You will work in a small, flexible and growing team with flat hierarchies and expertise in computer vision, deep learning, software development, sensor fusion and embedded hardware development. You contribute to exciting manifold projects, for example in the automotive industry, and work together with partners such as BMW, Continental, Intel, Schaeffler and KIT.

Your Tasks:

  • Development of C/C++ code that interacts closely with the sensor fusion
  • Development of firmware and drivers for next generation positioning systems
  • Hardening of implementations against security and safety threats
  • Adaptation of our systems to specific customer projects
  • Close cooperation with the hardware, software and sensor fusion team
  • Patenting and publishing of developed approaches is encouraged

Your Qualifications:

  • Comprehensive experience in C/C++ software development
  • Computer science degree (or comparable)
  • Good knowledge of C, modern C++ features (C++14 and later), the standard library, smart pointers, make, CMake, multi-threaded software, networking, object orientation and software testing
  • Experience with good coding practices and design patterns
  • High motivation to contribute to the technical development, ability to work independently and willingness to adapt to flexible tasks
  • Team player with quick perception, reliability and accuracy
  • Good communication skills in English and German

Sensor Fusion Engineer (m/f/d)

Job advertisement Id FT011 online since 05/2022

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

You will be part of our agile software development team and contribute to our tightly coupled sensor fusion framework. The core of the algorithm is written in Matlab and C/C++. Your job will be to combine the complementary advantages of all our sensors in an extended Kalman filter and to improve this fusion continuously. In particular, robust position and attitude determination with meaningful integrity-level estimation plays a major role in your tasks.
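As a rough illustration of the underlying principle, the sketch below shows one cycle of a minimal linear Kalman filter (prediction with a motion model, correction with a position measurement); an extended Kalman filter additionally linearizes nonlinear models at each step. All numerical values (time step, noise covariances, trajectory) are made up for this example and are not ANavS parameters.

```python
import numpy as np

def predict(x, P, F, Q):
    """Propagate state estimate x and covariance P with motion model F."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Correct the prediction with a measurement z (e.g. a GNSS position fix)."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1D constant-velocity model: state = [position, velocity], dt = 0.1 s
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])             # assumed process noise
H = np.array([[1.0, 0.0]])            # only position is observed
R = np.array([[0.01]])                # assumed GNSS noise, (0.1 m)^2

x = np.array([0.0, 1.0])              # start at 0 m, moving at 1 m/s
P = np.eye(2)
for k in range(1, 11):
    x, P = predict(x, P, F, Q)
    z = np.array([k * dt * 1.0])      # simulated noise-free GNSS fix
    x, P = update(x, P, z, H, R)

print(round(x[0], 2))  # estimated position after 1 s → 1.0
```

In the real system the state is of course far richer (3D position, attitude, sensor biases) and the measurement models are nonlinear, but the predict/update structure is the same.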

You will work in a growing team with flat hierarchies and expertise in computer vision, deep learning, software development, sensor fusion and embedded hardware development. You contribute to exciting manifold projects, for example in the automotive industry, and work together with partners such as BMW, Continental, Intel, Schaeffler and KIT.

Your Tasks:

  • Improvement of our tightly coupled sensor fusion framework
  • Development of innovative approaches to position integrity
  • Extending the sensor fusion framework based on new sensors/chips
  • Development of Continuous Integration (CI) and code quality tools
  • Adaptation of our sensor fusion framework to specific customer projects
  • Close cooperation with our embedded hardware team
  • Patenting and publishing of developed approaches is encouraged

Your Qualifications:

  • Comprehensive experience in Matlab and C/C++
  • Degree in electrical engineering, geodesy or aerospace engineering (or comparable) with a focus on GNSS
  • Experience in statistics and filter techniques (e.g. Kalman filter)
  • Experience with good coding practices and design patterns
  • High motivation to contribute to the technical development, ability to work independently and willingness to adapt to flexible tasks
  • Team player with quick perception, reliability and accuracy
  • Good communication skills in English and German

Mobile/Web App Developer (m/f/d)

Job advertisement Id FT005 online since 05/2022

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

Your tasks:
  • Develop a web-based application for our sensor fusion platform
  • Define UI/UX designs for the application
  • Enhance the platform API together with the hardware department
  • Establish a deployment process for the new application
  • Provide a test concept for the application
Your profile:
  • Proficient understanding of web development by using established JavaScript frameworks like React, Angular or Vue
  • Good understanding of native mobile app development for iOS and Android
  • Experience with frameworks like React Native or Ionic would be beneficial
  • Experience with wireless and networking technologies (TCP/IP communication)
  • Good feeling for design and user experience
  • Own projects to show your experience
  • Good communication skills in English and German
  • Team player with quick perception, reliability and accuracy

Embedded Systems Engineer (m/f/d)

Job advertisement Id FT006 online since 05/2022

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

You will be part of an agile hardware team and develop highly capable next generation sensor hardware. We develop our own PCBs with integrated power supply, battery management, high bandwidth wired and wireless communication, and a multitude of sensors in always evolving novel combinations. Your job will be to develop circuits and layouts, create and maintain supply chains, test and assemble prototypes.

You will work in a small, flexible and growing team with flat hierarchies and expertise in computer vision, deep learning, software development, sensor fusion and embedded hardware development. You contribute to exciting manifold projects, for example in the automotive industry, and work together with partners such as BMW, Continental, Intel, Schaeffler and KIT.

Your Tasks:

  • PCB Design
  • Spice Simulation
  • Prototype integration and evaluation
  • EM-compatibility engineering
  • CE compliance engineering
  • Close cooperation with the hardware, software and sensor fusion team
  • Patenting and publishing of developed approaches is encouraged

Your Qualifications:

  • Comprehensive experience with PCB design tools (Altium Designer or EAGLE)
  • Degree in embedded systems (or comparable)
  • Experience with Linux and wireless communication systems
  • Craftsmanship
  • High motivation to contribute to the technical development, ability to work independently and willingness to adapt to flexible tasks
  • Team player with quick perception, reliability and accuracy
  • Good communication skills in English and German

Computer Vision Engineer - Focus on Camera Systems (m/f/d)

Job advertisement Id FT007 online since 17/06/2021

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

You will contribute to the integration of camera sensors into the existing Multi-GNSS/INS-based sensor fusion positioning system, which already achieves centimeter-level accuracy in many scenarios. To bridge challenging GNSS scenarios such as tunnels, urban canyons or tree cover, the fusion of camera-derived pose measurements from visual odometry or SLAM is desired. Besides the primary focus on precise positioning, visual sensors are used for mapping and environment detection, which includes 3D point cloud and 2D road mapping, object detection and semantic segmentation.
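The bridging idea can be sketched in a few lines: while GNSS is unavailable, relative pose increments from visual odometry are chained onto the last absolute fix (dead reckoning). The poses and increments below are purely hypothetical and serve only to show the pose-composition step.

```python
import numpy as np

def compose(pose, delta):
    """Chain a relative visual-odometry increment (dx, dy, dtheta in the
    body frame) onto an absolute 2D pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

# Last good GNSS/INS pose before entering the tunnel (hypothetical)
pose = (100.0, 50.0, 0.0)

# Hypothetical per-frame VO increments inside the tunnel:
# 1 m forward per frame with a slight left turn
for _ in range(5):
    pose = compose(pose, (1.0, 0.0, 0.02))

print(round(pose[0], 2), round(pose[1], 2))
```

In practice the increments are 6-DOF, carry uncertainty, and are fed into the Kalman filter rather than chained open-loop, since pure dead reckoning drifts over time.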

Aside from computer vision another increasingly relevant topic is integrity monitoring, which may be part of your tasks. This includes monitoring of sensor data, intermediate system states and final positioning outputs to provide system status information and feedback to previous modules. GNSS spoofing detection is part of integrity monitoring in that manipulation of GNSS signals violates the system integrity. Computer vision can contribute to this task by providing alternative positioning solutions.

Your responsibility will cover the whole pipeline starting from the sensor’s integration to the algorithm development and up to the interface implementation for sensor fusion. The goal is the enhancement of our products, in particular our Integrated Sensor Platform (ISP), with real-time capable algorithms on embedded NVIDIA platforms. Accompanying this technical focus, you will take care of funded as well as customer projects that intersect with your technical expertise.

You will work in a small, flexible and growing team with flat hierarchies and expertise in computer vision, deep learning, software development, sensor fusion and embedded hardware development. You contribute to exciting manifold projects, for example in the automotive industry, and work together with partners such as BMW, Continental, Intel, Schaeffler and KIT. ANavS provides free drinks, fruits and snacks, and a kicker table, and is located in Munich Laim with direct connection to U-Bahn, Bus and S-Bahn.

 
Your tasks:
  • Development of algorithms for:
    • Visual odometry and visual SLAM, to improve current positioning performance
    • 2D/3D mapping, to generate precise maps for localization
    • Semantic segmentation and object detection, for environment detection
  • Application of state-of-the-art computer vision, machine learning and deep learning techniques
  • Potentially: Development of AI-based integrity monitoring algorithms, including machine learning based spoofing detection to provide system status and feedback
  • Implementation of interfaces between the computer vision and sensor fusion framework
  • Selection, evaluation and integration of camera sensors
  • Camera calibration and time synchronization (e.g. hardware triggering)
  • Integration of additional sensors, such as IMU, wheel odometry or GNSS-based pose measurements into computer vision approaches
  • Development of real-time solutions and docker containers for our embedded platforms
  • Responsibility for funded and customer projects
  • Bringing developed solutions towards product stage
  • Close cooperation with the hardware, software and sensor fusion team
  • Patenting and publishing of developed approaches is encouraged
Your profile:
  • Preferably 2 years' experience in computer vision and machine learning/deep learning
  • Computer science degree (or comparable)
  • Well-founded knowledge of SLAM techniques for localization and mapping
  • Well-founded knowledge of C++, Python and deep learning frameworks such as PyTorch and TensorFlow
  • Practical experience in training, developing and evaluating deep neural networks for computer vision
  • Preferably experience with camera sensors, calibration, integration and time synchronization
  • Preferably experience with ROS/ROS2 and the CARLA or Gazebo simulators
  • High motivation to contribute to the technical development, ability to work independently and willingness to adapt to flexible tasks
  • Team player with quick perception, reliability and accuracy
  • Good communication skills in English and German

Computer Vision Engineer - Focus on Lidar and Radar Systems (m/f/d)

Job advertisement Id FT008 online since 17/06/2021

ANavS – Advanced Navigation Solutions has three lines of business: precise positioning systems, precise mapping systems and snow monitoring systems. The core of the ANavS positioning systems is a modular and flexibly configurable sensor fusion of GNSS, inertial, odometry, UWB, camera and Lidar measurements. The innovative positioning algorithms were developed and patented by ANavS and include the newest RTK/PPP and AI methods. The main products of ANavS are the Multi-Sensor RTK module, the RTCM base station, and the Integrated Sensor Platform (ISP) with 3 integrated GNSS receivers, an IMU, a wheel odometry interface, 2 cameras, a 3D Lidar, an LTE module for the reception of RTK corrections, and a processor for the sensor fusion. The ANavS products have a wide range of applications, including the automotive, robotics, automation, maritime, railway, aerospace, agriculture and mining industries.

You will contribute to the integration of Lidar and Radar sensors into the existing Multi-GNSS/INS-based sensor fusion positioning system, which already achieves centimeter-level accuracy in many scenarios. To bridge challenging GNSS scenarios such as tunnels, urban canyons or tree cover, the fusion of Lidar/Radar-derived pose measurements from Lidar/Radar odometry or SLAM is desired. Besides the primary focus on precise positioning, these sensors are used for mapping and environment detection, which includes 3D point cloud and 2D road mapping, object detection and semantic segmentation.

Aside from computer vision another increasingly relevant topic is integrity monitoring, which may be part of your tasks. This includes monitoring of sensor data, intermediate system states and final positioning outputs to provide system status information and feedback to previous modules. GNSS spoofing detection is part of integrity monitoring in that manipulation of GNSS signals violates the system integrity. Computer vision can contribute to this task by providing alternative positioning solutions.

Your responsibility will cover the whole pipeline starting from the sensor’s integration to the algorithm development and up to the interface implementation for sensor fusion. The goal is the enhancement of our products, in particular our Integrated Sensor Platform (ISP), with real-time capable algorithms on embedded NVIDIA platforms. Accompanying this technical focus, you will take care of funded as well as customer projects that intersect with your technical expertise.

You will work in a small, flexible and growing team with flat hierarchies and expertise in computer vision, deep learning, software development, sensor fusion and embedded hardware development. You contribute to exciting manifold projects, for example in the automotive industry, and work together with partners such as BMW, Continental, Intel, Schaeffler and KIT. ANavS provides free drinks, fruits and snacks, and a kicker table, and is located in Munich Laim with direct connection to U-Bahn, Bus and S-Bahn.

 

Your tasks:
  • Development of algorithms for:
    • Lidar/Radar odometry and SLAM, to improve current positioning performance
    • 2D/3D mapping, to generate precise maps for localization
    • Semantic segmentation and object detection, for environment detection
  • Application of state-of-the-art computer vision, machine learning and deep learning techniques
  • Potentially: Development of AI-based integrity monitoring algorithms, including machine learning based spoofing detection to provide system status and feedback
  • Implementation of interfaces between the computer vision and sensor fusion framework
  • Selection, evaluation and integration of Lidar and Radar sensors
  • Lidar/Radar sensor calibration and time synchronization (e.g. hardware triggering)
  • Integration of additional sensors, such as IMU, wheel odometry or GNSS-based pose measurements into computer vision approaches
  • Development of real-time solutions and docker containers for our embedded platforms
  • Responsibility for funded and customer projects
  • Bringing developed solutions towards product stage
  • Close cooperation with the hardware, software and sensor fusion team
  • Patenting and publishing of developed approaches is encouraged
Your profile:
  • Preferably 2 years' experience in computer vision and machine learning/deep learning
  • Computer science degree (or comparable)
  • Well-founded knowledge of SLAM techniques for localization and mapping
  • Well-founded knowledge of C++, Python and deep learning frameworks such as PyTorch and TensorFlow
  • Practical experience in training, developing and evaluating deep neural networks for computer vision
  • Preferably experience with Lidar/Radar sensors, calibration, integration and time synchronization
  • Preferably experience with ROS/ROS2 and the CARLA or Gazebo simulators
  • High motivation to contribute to the technical development, ability to work independently and willingness to adapt to flexible tasks
  • Team player with quick perception, reliability and accuracy
  • Good communication skills in English and German

Enhancing the ANavS Positioning System with Camera, Lidar or Radar sensors (m/f/d)

Job advertisement Id MA003 online since 01/2022

The ANavS real-time positioning system provides centimeter-accurate position information and is based on a sensor fusion of satellite signals (GNSS signals) and inertial measurements from an inertial measurement unit (IMU). In addition, vehicle wheel data, barometer and range measurements can be integrated to improve the performance. Camera, Lidar and radar sensors can, on the one hand, also be used to improve and robustify the localization performance, for example in GNSS-denied areas such as tunnels, and, on the other hand, enable detection of the environment, which includes map creation and object detection.

Another important component to improve the safety of the positioning system is integrity monitoring, which makes sure that all sensor fusion data, such as sensor raw data or navigation solutions, are valid and consistent. This component may also profit from camera, Lidar and radar sensor information, for example through cross-checks with other sensor or solution data. Scenarios range from simple sensor malfunctions to complex GNSS spoofing attacks, in which satellite signals are manipulated to yield wrong position information.
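A minimal sketch of such a cross-check, with hypothetical numbers and a simple 3-sigma residual test (real integrity monitors are considerably more elaborate): the GNSS fix is compared against the position predicted from another sensor, and an integrity flag is raised when the residual is implausibly large.

```python
import math

def cross_check(gnss_pos, predicted_pos, sigma, threshold=3.0):
    """Flag a 2D GNSS fix whose distance to the position predicted from
    another sensor (e.g. odometry) exceeds `threshold` standard deviations."""
    dx = gnss_pos[0] - predicted_pos[0]
    dy = gnss_pos[1] - predicted_pos[1]
    residual = math.hypot(dx, dy)
    return residual > threshold * sigma, residual

# Consistent fix: a few centimeters from the prediction, sigma = 0.05 m
flagged, r = cross_check((10.02, 5.01), (10.00, 5.00), sigma=0.05)
print(flagged)   # False: fix accepted

# Spoofed or faulty fix: several meters away from the prediction
spoofed, r2 = cross_check((18.00, 5.00), (10.00, 5.00), sigma=0.05)
print(spoofed)   # True: fix rejected, integrity flag raised
```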

 
Your tasks:
  • Your topic will focus on one of the sensors (camera, Lidar or radar), with an emphasis on either object detection or on localization and mapping, depending on your background, interests and skills as well as our current priorities. Integrity monitoring is another possible thesis direction, which does not primarily focus on object detection or localization and mapping.
Your profile:
  • Master student in computer science, robotics, or related areas
  • Experience in computer vision, visual navigation, and preferably satellite-based navigation
  • Very good programming skills in C++ and Python and experience in software development
  • Very good analytical and debugging skills
  • Experience with deep learning and machine learning techniques and frameworks, such as PyTorch or Tensorflow
  • Theoretical knowledge and practical experience in Simultaneous Localization and Mapping (SLAM), or object detection methods would be ideal.
  • Experience with camera, Lidar or radar sensors would be ideal.
  • Strong motivation and ability to work independently
  • Team player
  • Good communication skills in English and preferably in German too

Support of Embedded Systems Team (m/f/d)

Job advertisement Id WS002 online since 05/2022

Support of the development of our embedded sensor systems for automotive and robotic applications with focus on architectural and functional software aspects.

 

Your Core Tasks:
  • Development of self-test and failsafe mechanisms
  • Hardening of implementation against security and safety threats
  • Commissioning of the (Linux-based) operating system for novel embedded systems
  • Work on the boot process and tight Linux integration on custom hardware
Optional Tasks:
  • Driver, Firmware and further software development
  • Adaptation of our systems to specific customer projects
  • Development of remote maintenance mechanisms
Core Qualifications:
  • Solid programming skills (C/C++)
  • Solid knowledge of operating systems (focus on Linux)
  • High motivation to gain new skills
Additional Welcome Qualifications:
  • Experience as software developer (C/C++ or Java)
  • Training as Computer Scientist or Electrical Engineer
  • Experience with kernel module development
  • Experience with good coding practices and design patterns
  • Familiarity with web / networking technology
  • Theoretical understanding and practical knowledge of cryptography

Support of our Computer Vision Team (m/f/d)

Job advertisement Id WS004 online since 01/2022

You will support our Computer Vision team in current tasks involving localization, mapping and object detection with camera, Lidar or radar data.

The general focus is on improving and enhancing the current GNSS/inertial-based positioning system, for example in case of degraded or missing satellite signals, or to add environment detection capabilities by map creation or object detection using camera, Lidar or radar sensors.

Your tasks may cover all parts of the pipeline: sensor evaluation, integration and calibration; collection and post-processing of datasets; development support for the data acquisition system, including, for example, the embedded processing platform (NVIDIA Jetson NX Dev. Kit) and Linux-based Docker containers; evaluation of publicly available algorithms (e.g. Visual or Lidar SLAM); algorithm implementation; and implementation of interfaces or tools in Python or C++. Other potential topics relate to integrity monitoring for sensor fusion or to sensor fusion related implementation tasks. AI, machine learning and deep learning approaches are the basis for several tasks; implementations are usually based on PyTorch or Tensorflow. Your task will be related to our product development or to funded projects.

 
Your tasks:
  • Will be defined together with you depending on your background, interests and skills and our current priorities.
Your profile:
  • Student in computer science, robotics, or related areas
  • Experience in computer vision, visual navigation, and preferably satellite-based navigation
  • Very good programming skills in C++ and Python and experience in software development
  • Very good analytical and debugging skills
  • Experience with deep learning and machine learning techniques and frameworks, such as PyTorch or Tensorflow
  • Theoretical knowledge and practical experience in Simultaneous Localization and Mapping (SLAM), or object detection methods would be ideal.
  • Experience with camera, Lidar or radar sensors would be ideal.
  • Strong motivation and ability to work independently
  • Team player
  • Good communication skills in English and preferably in German too

Does something sound interesting to you?
We look forward to your application.
Go to application form