Detecting, identifying, and recognizing salient regions or feature points
in images is an important and fundamental problem in the computer vision
and robotics communities. Tasks such as landmark detection, visual odometry,
and object recognition benefit from stable and repeatable salient features
that are invariant to a variety of effects, including rotation, scale changes,
viewpoint changes, noise, and changes in illumination. Recently, two promising
new approaches, SIFT and SURF, have been published. In this paper we compare
and evaluate how well different available implementations of SIFT and SURF
perform in terms of invariance and runtime efficiency.