Tightly coupled position determination with Visual-Odometry, GNSS, Wheel-Odometry and IMU

Integration of Visual Positioning into GNSS/ Odometry/ IMU tightly coupled positioning

Fig. 1 shows a comparison of the GNSS/ wheel odometry/ IMU tightly coupled RTK positioning with and without integrated visual odometry (monocular camera). The hardware consists of the ANavS MSRTK module. The trajectory starts with a rectangular, repetitive pattern in an open field. The initial convergence of the RTK float solution is also shown. The position estimates with and without visual positioning are well-aligned, which indicates that the positioning is correct in both configurations. After the rectangular pattern, the robot drove towards trees and bushes (upper part of the trajectory) to test the positioning performance in more challenging conditions. A noticeable deviation between the position trajectories with and without visual odometry can be observed there. The benefit of the visual odometry becomes apparent at the RTK re-fixing after passing the section with trees and bushes: the position correction is only 20 cm with visual odometry compared to 30 cm without it. The diagram also shows three highlighted locations; the respective camera images are provided in Figs. 2 and 3.

Fig. 1: Analysis of Multi-GNSS/ IMU/ wheel odometry tightly coupled RTK with and without integrated visual odometry.
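In a tightly coupled architecture, the raw GNSS observations are fused with the IMU, wheel odometry and visual measurements in a single filter, rather than combining separately computed position fixes. The following minimal sketch illustrates this principle with a single pseudorange update of an extended Kalman filter; the simplified state (position and receiver clock bias), the function name and the noise value are illustrative assumptions and not the ANavS implementation, which additionally processes carrier-phase measurements to obtain the RTK solution.

import numpy as np

def pseudorange_update(x, P, sat_pos, rho_meas, sigma_rho=1.0):
    # Illustrative tightly coupled update with one raw pseudorange.
    # x: state [px, py, pz, clock_bias], clock bias in metres
    # P: 4x4 state covariance, sat_pos: satellite position (3,), rho_meas: measured pseudorange (m)
    p = x[:3]
    rng = np.linalg.norm(sat_pos - p)
    rho_pred = rng + x[3]                                      # predicted pseudorange
    H = np.hstack([(p - sat_pos) / rng, 1.0]).reshape(1, 4)    # measurement Jacobian
    S = float(H @ P @ H.T) + sigma_rho ** 2                    # innovation covariance (scalar)
    K = (P @ H.T) / S                                          # Kalman gain, shape (4, 1)
    innovation = rho_meas - rho_pred
    x_new = x + (K * innovation).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

Processing each satellite's raw measurement in this way keeps the GNSS information usable even when too few satellites are visible for a standalone position fix, which is why the tightly coupled solution degrades more gracefully near trees and bushes.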

Fig. 2 includes camera images with ∼ 20 patch features on the grass at Pinakothek, Munich, with trees in the background. The illumination is higher in the left image than in the right image. The multilevel patch features are determined by ROVIO and represented by squares. Green denotes successfully tracked patch features, and red denotes rejected patches. The final (i.e. after iterative convergence) location of each landmark is shown as a small red dot surrounded by four green or red dots. The surrounding locations are checked for higher innovation residuals to decide whether a patch feature is kept (green) or rejected (red). The estimated uncertainty of each landmark location is indicated by a yellow ellipse. The patch feature in the upper right part of the left image, where the image is very dark, has the largest uncertainty. Almost all patch features are shown in green, which indicates that grass patches can be tracked well.

Fig. 2: Camera images with ∼ 20 patch features on the grass at Pinakothek, Munich, with trees in the background.
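The yellow ellipses correspond to the usual visualization of a 2x2 image-space covariance of a landmark location. The short sketch below shows how the ellipse axes and orientation could be derived by an eigendecomposition; the function name, the 3-sigma scaling and the assumption of an already projected 2x2 covariance are illustrative and not taken from the ROVIO source.

import numpy as np

def covariance_ellipse(cov_2x2, n_sigma=3.0):
    # Semi-axis lengths and orientation (rad) of the n-sigma uncertainty ellipse
    eigvals, eigvecs = np.linalg.eigh(cov_2x2)        # eigenvalues in ascending order
    major = n_sigma * np.sqrt(eigvals[1])             # semi-major axis
    minor = n_sigma * np.sqrt(eigvals[0])             # semi-minor axis
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # orientation of the major axis
    return major, minor, angle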

Fig. 3 shows the camera image at the third highlighted location in Fig. 1. The patch features are again well distributed over the camera image. The landmark locations are shown with red dots. The consistency of each patch feature/ landmark location is checked at the surrounding dots. The checks passed for all patch features except for the one in the upper right part close to the centre, where two out of four consistency checks failed. Nevertheless, this patch feature is still used, since the other two consistency checks confirmed the patch.

Fig. 3: Camera image with tracked patch features at Pinakothek, Munich, Germany.
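As described for Figs. 2 and 3, each converged landmark location is compared against four surrounding test locations: a check passes if the innovation residual at the surrounding location is higher than at the converged location, and the patch feature is kept if enough checks pass. The following sketch captures this decision rule; the function name, the residual inputs and the acceptance threshold of two passed checks are assumptions for illustration and do not reproduce the ROVIO implementation.

def consistency_vote(residual_at_landmark, residuals_at_surroundings, min_passed=2):
    # A check passes if the residual at a surrounding location is higher than
    # at the converged landmark location, i.e. the landmark is a local minimum
    # in that direction. The patch feature is kept if enough checks pass.
    passed = [r > residual_at_landmark for r in residuals_at_surroundings]
    return sum(passed) >= min_passed, passed

# Example mirroring Fig. 3: two of four checks fail, the patch feature is still used.
keep, passed = consistency_vote(1.0, [1.4, 0.8, 1.2, 0.9], min_passed=2)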

 

Conclusion

The autonomous driving of robots requires precise and reliable positioning. In this paper, we analyzed the sensor fusion of GNSS-RTK, INS, odometry and visual positioning. The focus was put on visual positioning and its integration into the sensor fusion. The paper provided a quantitative performance analysis with real measurements and showed that centimeter-level positioning accuracy is feasible with the ANavS MSRTK module and a low-cost monocular camera.