SURF Performance in BoofCV

The SURF descriptor is a state-of-the-art image region descriptor that is scale, orientation, and illumination invariant. By using an integral image it can be computed efficiently across different scales. Inside BoofCV, SURF can be configured in many different ways to create several variants. To ensure correctness and optimal performance, a study was performed comparing BoofCV's performance against other open source libraries as well as against its own variants.
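
The integral image mentioned above is the key to SURF's speed: once it is built, any axis-aligned box sum, and therefore the box filters SURF evaluates at every scale, costs only four array lookups. The Java sketch below illustrates the idea; it is a minimal, self-contained example with made-up names, not BoofCV's implementation.

<syntaxhighlight lang="java">
// Minimal illustration of the integral-image trick SURF relies on.
// Not BoofCV code; class and method names are made up for this example.
public class IntegralImageExample {

    /** ii[y][x] = sum of all pixels in image[0..y][0..x] (inclusive). */
    public static double[][] integralImage(double[][] image) {
        int h = image.length, w = image[0].length;
        double[][] ii = new double[h][w];
        for (int y = 0; y < h; y++) {
            double rowSum = 0;
            for (int x = 0; x < w; x++) {
                rowSum += image[y][x];
                ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
            }
        }
        return ii;
    }

    /** Sum over the rectangle [x0,x1] x [y0,y1] using four lookups, independent of its size. */
    public static double rectSum(double[][] ii, int x0, int y0, int x1, int y1) {
        double a = (x0 > 0 && y0 > 0) ? ii[y0 - 1][x0 - 1] : 0;
        double b = (y0 > 0) ? ii[y0 - 1][x1] : 0;
        double c = (x0 > 0) ? ii[y1][x0 - 1] : 0;
        double d = ii[y1][x1];
        return d - b - c + a;
    }
}
</syntaxhighlight>

Because the cost of rectSum() does not depend on the rectangle's size, larger scales are no more expensive to evaluate than small ones.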

The results show that BoofCV provides two high-quality variants of SURF that are comparable to or better than other popular SURF implementations.

Tested Implementations:

{| cellpadding="5" cellspacing="0" border="1"
! Implementation
! Version
! Comment
|-
| BoofCV: SURF || 10/2011 || Fast but less accurate. See FactoryDescribeRegionPoint.surf()
|-
| BoofCV: MSURF || 10/2011 || Accurate but slower. See FactoryDescribeRegionPoint.msurf()
|-
| OpenSURF || 27/05/2010 || http://www.chrisevansdev.com/computer-vision-opensurf.html
|-
| Reference || 1.0.9 || http://www.vision.ee.ethz.ch/~surf/
|}

= Summary Results =

[[File:SURF performance-overall stability.gif|center|frame|Summary of Stability and Runtime Performance]]

Overall performance for each library is summarized in the plots above. Stability was scored by summing the number of correct associations across the entire image data set. Each library's total was then divided by that of the best-performing library to produce the relative plot.
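
As a concrete illustration of that normalization, here is a hedged Java sketch (generic library names, no real benchmark numbers) that turns per-library totals of correct associations into the relative scores shown in the plot.

<syntaxhighlight lang="java">
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of the relative-stability score: each library's total number of
// correct associations divided by the best library's total. Generic names only.
public class RelativeScore {

    /** Returns totalCorrect / max(totalCorrect) for every library. */
    public static Map<String, Double> relative(Map<String, Integer> totalCorrect) {
        int best = totalCorrect.values().stream().max(Integer::compare).orElse(1);
        Map<String, Double> out = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : totalCorrect.entrySet())
            out.put(e.getKey(), e.getValue() / (double) best);
        return out;
    }
}
</syntaxhighlight>

The best-performing library therefore always scores 1.0, and every other library is reported as a fraction of it.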

= Descriptor Stability =

Tests were performed using standardized test images from [1], which have known transformations. Because the transformation between images is known, the true associations are also known. Stability was measured by the number of correct associations between two images in a dataset. The testing procedure is summarized below; a code sketch of the association and scoring steps follows the list:

# For each image, detect features (scale and location) using the fast Hessian detector in BoofCV.
# For each image, compute a feature description for every detected feature.
# In each image sequence, associate features in the first image to the Nth image, where N > 1.
#* Association is done by minimizing Euclidean error.
#* Validation is done using reverse association, i.e. a match must be the optimal association both from frame 1 to N and from N to 1.
# Compute the number of correct associations.
#* An association is correct if it is within 3 pixels of the true location.
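
The sketch below makes steps 3 and 4 concrete. It is not the benchmark's actual code: matching is brute-force Euclidean nearest neighbor, validation requires the match to be mutual, and correctness is checked by mapping the feature's location through the known transform, assumed here to be a 3×3 homography, and applying the 3-pixel threshold.

<syntaxhighlight lang="java">
import java.util.List;

// Hedged sketch of the association and scoring steps. Names and data layout
// are illustrative only, not BoofCV's API.
public class AssociationSketch {

    /** Index of the descriptor in 'set' closest (Euclidean) to 'target', or -1 if empty. */
    static int nearest(double[] target, List<double[]> set) {
        int best = -1; double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < set.size(); i++) {
            double[] v = set.get(i);
            double d = 0;
            for (int k = 0; k < target.length; k++) {
                double diff = target[k] - v[k];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    /** Counts associations that are mutual best matches AND land within 3 pixels of truth. */
    static int countCorrect(List<double[]> descA, List<double[]> descB,
                            List<double[]> locA, List<double[]> locB,
                            double[][] H /* known 3x3 homography from image A to B */) {
        int correct = 0;
        for (int i = 0; i < descA.size(); i++) {
            int j = nearest(descA.get(i), descB);
            if (j < 0 || nearest(descB.get(j), descA) != i)
                continue; // reverse association must agree

            // project the feature's location in A into B using the known transform
            double[] p = locA.get(i);
            double w  = H[2][0]*p[0] + H[2][1]*p[1] + H[2][2];
            double px = (H[0][0]*p[0] + H[0][1]*p[1] + H[0][2]) / w;
            double py = (H[1][0]*p[0] + H[1][1]*p[1] + H[1][2]) / w;

            double dx = px - locB.get(j)[0], dy = py - locB.get(j)[1];
            if (Math.sqrt(dx*dx + dy*dy) <= 3.0)
                correct++;
        }
        return correct;
    }
}
</syntaxhighlight>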

Since the transformation between images is known, the true feature locations could have been used instead of the detected ones. However, in practice detected features will not lie at the exact point, and a descriptor needs to be tolerant of this type of error. Using the detected locations is therefore a more realistic measure of the descriptor's strength.

'''Configuration:''' All libraries were configured to compute SURF-64 descriptors as defined in the original SURF paper (a 4×4 grid of sub-regions, each contributing four wavelet response sums, for 64 elements in total).


= Stability Results =

[[File:SURF performance-stability bike.gif]] [[File:SURF performance-stability boat.gif]]
[[File:SURF performance-stability graf.gif]] [[File:SURF performance-stability leuven.gif]]
[[File:SURF performance-stability ubc.gif]] [[File:SURF performance-stability trees.gif]]
[[File:SURF performance-stability wall.gif]] [[File:SURF performance-stability bark.gif]]