Performance:SURF

SURF Performance in BoofCV

The SURF descriptor is a state-of-the-art image region descriptor that is invariant to scale, orientation, and illumination. By using an integral image, it can be computed efficiently across different scales. Inside BoofCV, SURF can be configured in many different ways to create several variants. To ensure correctness and optimal performance, a study was performed comparing BoofCV's implementations against other open source libraries as well as against each other.
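
The integral image trick referred to above can be illustrated with a short, self-contained sketch. The code below is plain Java and not BoofCV's own implementation; it shows how the sum over any axis-aligned rectangle is recovered from four table lookups, which is what lets SURF's box filters be evaluated at any scale for constant cost.

  // Plain-Java sketch of the integral image idea used by SURF (not BoofCV's implementation).
  public class IntegralImageSketch {

      // ii[y][x] = sum of all pixels in input[0..y][0..x]
      public static double[][] compute(double[][] input) {
          int h = input.length, w = input[0].length;
          double[][] ii = new double[h][w];
          for (int y = 0; y < h; y++) {
              double rowSum = 0;
              for (int x = 0; x < w; x++) {
                  rowSum += input[y][x];
                  ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
              }
          }
          return ii;
      }

      // Sum of the rectangle with corners (x0,y0) and (x1,y1), inclusive.
      // Only four lookups are needed, regardless of the rectangle's size.
      public static double boxSum(double[][] ii, int x0, int y0, int x1, int y1) {
          double a = ii[y1][x1];
          double b = (x0 > 0) ? ii[y1][x0 - 1] : 0;
          double c = (y0 > 0) ? ii[y0 - 1][x1] : 0;
          double d = (x0 > 0 && y0 > 0) ? ii[y0 - 1][x0 - 1] : 0;
          return a - b - c + d;
      }
  }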

It is shown that BoofCV provides two high-quality variants of SURF that are comparable to or better than other popular SURF implementations in C/C++ or Java.

Tested Implementations:

  Implementation    Version      Language   Comment
  BoofCV: SURF      10/2011      Java       Fast but less accurate. See FactoryDescribeRegionPoint.surf()
  BoofCV: MSURF     10/2011      Java       Accurate but slower. See FactoryDescribeRegionPoint.msurf()
  OpenSURF          27/05/2010   C++        http://www.chrisevansdev.com/computer-vision-opensurf.html
  Reference         1.0.9        C++        http://www.vision.ee.ethz.ch/~surf/
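
For reference, the snippet below sketches how the two BoofCV variants listed above might be created through the factory methods named in the table. It is a hypothetical sketch only: the import paths, parameters, and return types are assumptions and have changed between BoofCV releases, so consult the library's documentation for the exact signatures.

  // Hypothetical sketch only. The factory methods are named in the table above,
  // but the imports, parameters, and return types below are assumptions that
  // vary across BoofCV versions.
  import boofcv.factory.feature.describe.FactoryDescribeRegionPoint; // assumed package
  import boofcv.struct.image.ImageFloat32;                           // assumed image type

  public class CreateSurfVariants {
      public static void main(String[] args) {
          // Fast but less accurate variant (arguments assumed)
          Object surf = FactoryDescribeRegionPoint.surf(true, ImageFloat32.class);
          // Slower but more accurate variant (arguments assumed)
          Object msurf = FactoryDescribeRegionPoint.msurf(true, ImageFloat32.class);
      }
  }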

Summary Results

[Plots: overall_stability.gif (higher is better) and overall_describe_speed.gif (lower is better)]

Overall performance for each library is summarized in the plots above. The stability score was computed by summing the number of correct associations across the entire image data set. Each library's total was then divided by the best-performing library's total to create the relative plot.
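
As a concrete illustration of how the relative scores were formed, the helper below (a hypothetical example, not part of any of the tested libraries) totals the correct associations for each library across all images and divides by the best total.

  import java.util.HashMap;
  import java.util.Map;

  // Hypothetical illustration of the summary metric: total correct associations
  // per library across the data set, normalized by the best-performing library.
  public class RelativeStability {
      public static Map<String, Double> relativeScores(Map<String, int[]> correctPerImage) {
          Map<String, Integer> totals = new HashMap<String, Integer>();
          int best = 1; // avoids division by zero if every library scores zero
          for (Map.Entry<String, int[]> e : correctPerImage.entrySet()) {
              int sum = 0;
              for (int c : e.getValue())
                  sum += c;
              totals.put(e.getKey(), sum);
              best = Math.max(best, sum);
          }
          Map<String, Double> relative = new HashMap<String, Double>();
          for (Map.Entry<String, Integer> e : totals.entrySet())
              relative.put(e.getKey(), e.getValue() / (double) best);
          return relative;
      }
  }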

Descriptor Stability

Tests were performed using standardized test images from [1], which have known transformations between images. Because the transformation is known, the true associations can be determined. Stability was measured by the number of correct associations between two images in a data set. The testing procedure is summarized below:

  1. For each image, detect features (scale and location) using the fast Hessian detector in BoofCV.
  2. For each image, compute a feature description for all detected features.
  3. In each image sequence, associate features in the first image to the Nth image, where N > 1 (a sketch of this procedure follows the list).
    • Association is done by minimizing the Euclidean error between descriptors.
    • Validation is done using reverse association, i.e. the association must be the best match both from frame 1 to N and from N to 1.
  4. Count the number of correct associations.
    • An association is correct if it lies within 3 pixels of the true location.
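
The sketch below makes the association and validation steps concrete. It is plain Java, independent of the libraries being tested: descriptors are associated by minimizing Euclidean distance, an association is kept only if it is mutual (the reverse-association check), and it is counted as correct when the matched feature lies within 3 pixels of the location predicted by the known transformation.

  // Plain-Java sketch of the association and validation procedure described above.
  public class AssociationCheck {

      // Euclidean distance between two descriptor vectors
      static double distance(double[] a, double[] b) {
          double sum = 0;
          for (int i = 0; i < a.length; i++) {
              double d = a[i] - b[i];
              sum += d * d;
          }
          return Math.sqrt(sum);
      }

      // Index of the descriptor in 'set' closest to 'target'
      static int nearest(double[] target, double[][] set) {
          int best = -1;
          double bestDist = Double.MAX_VALUE;
          for (int i = 0; i < set.length; i++) {
              double d = distance(target, set[i]);
              if (d < bestDist) {
                  bestDist = d;
                  best = i;
              }
          }
          return best;
      }

      // Counts mutual nearest-neighbor associations whose matched location lies
      // within 3 pixels of the location predicted by the known transformation.
      // descA, descB  : descriptors in image 1 and image N
      // locB          : detected (x,y) locations in image N
      // predictedLocB : locations of image-1 features mapped into image N
      static int countCorrect(double[][] descA, double[][] descB,
                              double[][] locB, double[][] predictedLocB) {
          int correct = 0;
          for (int i = 0; i < descA.length; i++) {
              int j = nearest(descA[i], descB);
              if (nearest(descB[j], descA) != i)
                  continue; // fails the reverse-association check
              double dx = locB[j][0] - predictedLocB[i][0];
              double dy = locB[j][1] - predictedLocB[i][1];
              if (Math.sqrt(dx * dx + dy * dy) <= 3.0)
                  correct++;
          }
          return correct;
      }
  }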

Since the transformation between images is known, the true feature locations could have been used instead of the detected ones. In practice, however, detected features will not lie at the exact point, and a descriptor needs to be tolerant of this type of error. Using the detected locations therefore gives a more accurate measure of the descriptor's strength.

Configuration: All libraries were configured to compute SURF-64 descriptors as defined in the original SURF paper.


Stability Results

[Stability plots for each image sequence: bike, boat, graf, leuven, ubc, trees, wall, bark]