Abstract
Keypoint detection and description is a commonly used building block in computer vision systems, particularly for robotics and autonomous driving. However, the majority of techniques to date have focused on standard cameras, with little consideration given to fisheye cameras, which are commonly used in urban driving and automated parking. In this paper, we propose a novel training and evaluation pipeline for fisheye images. We use SuperPoint, a self-supervised keypoint detector and descriptor that has achieved state-of-the-art results on homography estimation, as our baseline. We introduce a fisheye adaptation pipeline to enable training on undistorted fisheye images. We evaluate the performance on the HPatches benchmark and, by introducing a fisheye-based evaluation method for detection repeatability and descriptor matching correctness, on the Oxford RobotCar dataset.
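The abstract mentions evaluating detection repeatability between image pairs. As a rough illustration only, the sketch below shows one common way such a metric is computed: keypoints from a reference view are mapped into a second view with a known warp and counted as repeated if a detection lies within a small pixel radius. The function names, the identity warp, and the threshold are assumptions for demonstration, not the paper's exact evaluation code.

```python
# Hypothetical sketch of a detection-repeatability metric (not the paper's implementation).
import numpy as np

def repeatability(kpts_a, kpts_b, warp_a_to_b, eps=3.0):
    """kpts_a, kpts_b: (N, 2) arrays of (x, y) detections in images A and B.
    warp_a_to_b: callable mapping points from A's frame into B's frame
    (e.g. a homography for pinhole images or a fisheye re-projection).
    eps: correctness threshold in pixels."""
    if len(kpts_a) == 0 or len(kpts_b) == 0:
        return 0.0
    proj = warp_a_to_b(kpts_a)                      # A's keypoints expressed in B's frame
    d = np.linalg.norm(proj[:, None, :] - kpts_b[None, :, :], axis=2)  # (Na, Nb) distances
    repeated_a = (d.min(axis=1) <= eps).sum()       # A-points with a nearby B-detection
    repeated_b = (d.min(axis=0) <= eps).sum()       # B-points with a nearby projected A-detection
    return (repeated_a + repeated_b) / (len(kpts_a) + len(kpts_b))

# Toy usage with an identity warp standing in for a real fisheye mapping:
if __name__ == "__main__":
    a = np.array([[10.0, 20.0], [50.0, 60.0]])
    b = np.array([[11.0, 21.0], [200.0, 200.0]])
    print(repeatability(a, b, lambda p: p))         # -> 0.5
```

For fisheye images, the warp would be built from the camera's projection model rather than a planar homography, which is the distinction the paper's fisheye-based evaluation addresses.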
| Original language | English |
| --- | --- |
| Pages (from-to) | 340-347 |
| Number of pages | 8 |
| Journal | Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
| Volume | 4 |
| DOIs | |
| Publication status | Published - 2022 |
| Event | 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2022 - Virtual, Online. Duration: 6 Feb 2022 → 8 Feb 2022 |
Keywords
- Deep Learning
- Feature Description
- Feature Detection
- Fisheye Images
- Interest Points
- Keypoints