Performance:SURF

Comparison of SURF implementations

The SURF descriptor is a state-of-the-art image region descriptor that is invariant with regard to scale, orientation, and illumination. By using an integral image, the descriptor can be computed efficiently across different scales. In recent years it has emerged as one of the more popular and frequently-used feature descriptors, but it is not a trivial algorithm to implement, and several different implementations exist. The following study compares several different libraries to determine relative stability and runtime performance.
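
To make the integral-image trick concrete, the sketch below shows, in plain Java, how a summed-area table is built and how the sum of any rectangle is then obtained from only four lookups; this is what lets SURF's box-filter approximations cost the same at every scale. It is a minimal illustration, not code taken from any of the libraries tested below.

  // Minimal illustration of the integral image (summed-area table) idea used by SURF.
  public class IntegralImageExample {

      // ii[y][x] holds the sum of all pixels above and to the left of (x,y), inclusive.
      public static double[][] compute(double[][] gray) {
          int h = gray.length, w = gray[0].length;
          double[][] ii = new double[h][w];
          for (int y = 0; y < h; y++) {
              double rowSum = 0;
              for (int x = 0; x < w; x++) {
                  rowSum += gray[y][x];
                  ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
              }
          }
          return ii;
      }

      // Sum of the rectangle with corners (x0,y0) and (x1,y1), inclusive.
      // Four lookups regardless of rectangle size, which is why SURF's box
      // filters are equally cheap at every scale.
      public static double boxSum(double[][] ii, int x0, int y0, int x1, int y1) {
          double a = ii[y1][x1];
          double b = (x0 > 0) ? ii[y1][x0 - 1] : 0;
          double c = (y0 > 0) ? ii[y0 - 1][x1] : 0;
          double d = (x0 > 0 && y0 > 0) ? ii[y0 - 1][x0 - 1] : 0;
          return a - b - c + d;
      }
  }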


Tested Implementations:

Implementation   Version           Language   Threaded   Comment
BoofCV           alpha v0.1        Java       No         http://www.boofcv.org/ - Fast but less accurate. See FactoryDescribeRegionPoint.surf()
BoofCV-M         alpha v0.1        Java       No         http://www.boofcv.org/ - Accurate but slower. See FactoryDescribeRegionPoint.surfm()
OpenSURF         27/05/2010        C++        No         http://www.chrisevansdev.com/computer-vision-opensurf.html
Reference        1.0.9             C++        No         http://www.vision.ee.ethz.ch/~surf/
JOpenSURF        SVN r24           Java       No         http://code.google.com/p/jopensurf/
JavaSURF         SVN r4            Java       No         http://code.google.com/p/javasurf/
OpenCV           2.3.1 SVN r6879   C++        No [1]     http://opencv.willowgarage.com/wiki/
Pan-o-Matic      0.9.4             C++        No         http://aorlinsk2.free.fr/panomatic/?p=home

[1] OpenCV can be configured to use multi-threaded code if it is compiled with IPP. That was not done for this test, so it ran single-threaded.

Benchmark Source Code:

Questions or Comments about this study?
Post those here: https://groups.google.com/group/boofcv

Conclusions

[Figures: overall_stability.gif, overall_all_speed.gif]
Figure 1: Stability comparison using precomputed interest points. Higher is better.
Figure 2: Runtime performance for detecting and describing combined. Single 850x680 pixel image and 2000 features. Lower is better.

For the sake of those with short attention spans, the summary results are posted first and a discussion of testing methodology follows. Figure 1 shows a summary of each implementation's relative stability across a standard set of test images. Figure 2 shows how fast each library could detect and describe interest points.

One reason for JavaSURF's poor stability is that it only implements an upright version of SURF, so rotated images defeat the descriptor. Not computing orientation helps JavaSURF on the runtime benchmark, because it has fewer computations to perform. JOpenSURF is a straightforward port of the OpenSURF library to Java and shows comparable stability with the expected hit on runtime performance. JOpenSURF, OpenSURF and BoofCV-M all compute an enhanced version of the SURF descriptor, while BoofCV and OpenCV descriptors are closer to the SURF paper. I suspect that the descriptor computed by the reference library is also an improvement over what was presented in the SURF paper, but the source code is closed, so this theory cannot be directly verified.

OpenCV is a bit of an oddball library as far as SURF is concerned. It did not provide an interface that would allow it to be tested in the same manner as the other libraries, and comments in the code indicated that parts of it are multi-threaded using IPP. However, OpenCV was not compiled with IPP turned on so only a single thread was used. Because of these issues, OpenCV's own interest points were used instead of the precomputed ones when testing stability.

BoofCV's speed comes from algorithmic changes and code optimization. A new technique for orientation estimation was used in "BoofCV" but not "BoofCV-M", which accounts for a large speed boost at the cost of a slight loss in stability. Specially optimized code was used for features that lie entirely inside the image, counteracting Java's poor loop unrolling.

Descriptor Stability

The stability benchmark was performed using standardized test images from [1], which have known transformations. Stability was measured based on the number of correct associations between two images in the data set. The testing procedure for each library is summarized below:

  1. For each image, detect features (scale and location) using the fast Hessian detector in BoofCV.
    • Save results to a file and use the same file for all libraries.
  2. For each image, compute a feature description (including orientation) for all features found.
  3. In each image sequence, associate features in the first image to the Nth image, where N > 1 (see the sketch after this list).
    • Association is done by minimizing the Euclidean distance between descriptors.
    • Validation is done using reverse association, i.e. a pairing is kept only if it is the best match going from frame 1 to N and from N back to 1.
  4. Compute the number of correct associations.
    • An association is correct if it is within 3 pixels of the true location.
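
The following Java sketch outlines steps 3 and 4. The reverse-association check and the 3-pixel tolerance come from the procedure above; the class, the method names, and the array layout of descriptors and locations are purely illustrative and do not correspond to any tested library's API.

  // Sketch of the association (step 3) and scoring (step 4) described above.
  public class AssociationSketch {

      // Index of the descriptor in 'candidates' closest to 'target' in Euclidean distance.
      static int nearest(double[] target, double[][] candidates) {
          int best = -1;
          double bestDist = Double.MAX_VALUE;
          for (int i = 0; i < candidates.length; i++) {
              double d = 0;
              for (int k = 0; k < target.length; k++) {
                  double diff = target[k] - candidates[i][k];
                  d += diff * diff;
              }
              if (d < bestDist) { bestDist = d; best = i; }
          }
          return best;
      }

      // Count associations from image 1 to image N that survive the reverse check
      // and land within 3 pixels of the true location predicted by the known transform.
      static int countCorrect(double[][] desc1, double[][] descN,
                              double[][] trueLocInN, double[][] detectedLocN) {
          int correct = 0;
          for (int i = 0; i < desc1.length; i++) {
              int j = nearest(desc1[i], descN);              // forward association 1 -> N
              if (nearest(descN[j], desc1) != i) continue;   // reverse association must agree
              double dx = detectedLocN[j][0] - trueLocInN[i][0];
              double dy = detectedLocN[j][1] - trueLocInN[i][1];
              if (dx * dx + dy * dy <= 3.0 * 3.0) correct++; // within 3 pixels counts as correct
          }
          return correct;
      }
  }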

Since the transformation between images is known, the true locations could have been used directly. In practice, however, detected features never lie exactly at the predicted point, and a descriptor needs to tolerate this type of error, so allowing a small tolerance gives a more realistic measure of the descriptor's strength on real-world data.

Stability results shown in Figure 1 display the relative stability across all test images for each library. Each library's score is the sum, over the whole test data set, of the percentage of correctly associated features; relative stability is that score divided by the best performer's score.
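
As a small worked example of that normalization (the totals below are made up purely for illustration, not real results):

  // Each entry is one library's summed percent of correct associations over the data set.
  double[] totals = { 5.2, 4.8, 3.1 };   // made-up numbers, for illustration only
  double best = totals[0];
  for (double t : totals) best = Math.max(best, t);
  for (double t : totals) {
      // The best performer scores 1.0; everyone else is a fraction of that.
      System.out.printf("relative stability = %.2f%n", t / best);
  }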

Configuration: All libraries were configured to describe oriented SURF-64 features as defined in the original SURF paper. JavaSURF does not support orientation estimation. OpenCV forces orientation to be estimated inside the feature detector; therefore it was decided that the lesser evil would be to let OpenCV detect its own features. OpenCV's threshold was adjusted so that it detected about the same number of features.


Stability Results

[Per-sequence stability plots: stability_bike.gif, stability_boat.gif, stability_graf.gif, stability_leuven.gif, stability_ubc.gif, stability_trees.gif, stability_wall.gif, stability_bark.gif]

Runtime Speed

[Figures: overall_describe_speed.gif, overall_detect_speed.gif]
Figure 3: Describe runtime performance using 6415 precomputed interest points. Lower is better.
Figure 4: Detect runtime performance on an 850x680 image. Lower is better.

Each library's speed at describing and detecting was benchmarked individually (Figures 3 and 4) and combined (Figure 2). Each test was performed five times and only the best time is shown. Java libraries tended to exhibit more variation than native libraries, although all libraries showed a significant amount of variation from trial to trial.

Only the image processing time essential to SURF was measured, not image loading time. The time to convert the grayscale image into an integral image was measured, but not the time to convert the image to grayscale. Even if these image processing tasks were included, they would only account for a small fraction of the overall computation time. Elapsed time was measured in the actual application using System.currentTimeMillis() in Java and clock() in C++.
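
The Java timing loop has roughly the shape of the sketch below; the Runnable stands in for whatever SURF work is being measured and is not a real library call, and the C++ libraries follow the same pattern with clock().

  // Run the measured work several times and keep only the best (lowest) elapsed time.
  static long bestElapsedMillis(Runnable measuredWork, int trials) {
      long best = Long.MAX_VALUE;
      for (int i = 0; i < trials; i++) {
          long start = System.currentTimeMillis();
          measuredWork.run();                                  // e.g. describe all features
          best = Math.min(best, System.currentTimeMillis() - start);
      }
      return best;
  }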

Testing Procedure for Describe:

  1. Kill all extraneous processes.
  2. Load feature location and size from file.
  3. Compute descriptors (including orientation) for each feature while recording elapsed time.
  4. Repeat the timing 10 times and output the best result.
  5. Run the whole experiment 5 times for each library and record the best time.

Similar procedures were followed for the detect and the combined benchmarks.

Test Computer:

  • Ubuntu 10.10 64bit
  • Quadcore Q6600 2.4 GHz
  • Memory 8194 MB
  • g++ 4.4.5
  • Java(TM) SE Runtime Environment (build 1.6.0_26-b03)

Compiler and JRE Configuration:

  • All native libraries were compiled with -O3
  • Java applications were run with no special flags

Describe Specific Setup:

  • Input image was boat/img1
  • Fast Hessian features from BoofCV
    • 6415 Total

Detect Specific Setup:

  • It was not possible to configure the libraries to detect the exact same features
    • Each detection threshold was adjusted to top out at around 2000 features
  • Octaves: 4
  • Scales: 4
  • Base Size: 9
  • Initial Pixel Skip: 1

All (describe and detect) Specific Setup:

  • Same as detect setup

Comments: OpenCV was omitted from individual detect and describe benchmarks because the two tasks could not be decoupled in the same way as the other libraries. Both BoofCV and BoofCV-M use the same detector which is why only BoofCV is listed in the detector results.

Change History

  1. November 12, 2011
    • Added Pan-o-Matic to overall results