Comparison of SURF implementations

Speeded Up Robust Feature (SURF) is a state-of-the-art image region descriptor and detector that is invariant with regard to scale, orientation, and illumination. By using an integral image, the descriptor can be computed efficiently across different scales. In recent years it has emerged as one of the more popular and frequently-used feature descriptors, but it is not a trivial algorithm to implement, and several different implementations exist. The following study compares several different libraries to determine relative stability and runtime performance.
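
The efficiency claim above rests on the integral (summed-area) image: once it is computed, the sum of any axis-aligned rectangle costs four array lookups regardless of the rectangle's size, so the box filters SURF uses stay cheap at every scale. Below is a minimal Java sketch of the idea; it is illustrative only and is not taken from any of the tested libraries.

<pre>
public class IntegralImage {
    // Build the summed-area table: ii[y][x] = sum of img over [0..y] x [0..x].
    static double[][] compute(double[][] img) {
        int h = img.length, w = img[0].length;
        double[][] ii = new double[h][w];
        for (int y = 0; y < h; y++) {
            double rowSum = 0;
            for (int x = 0; x < w; x++) {
                rowSum += img[y][x];
                ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
            }
        }
        return ii;
    }

    // Sum of img over the rectangle (x0,y0)..(x1,y1), inclusive: four lookups.
    static double blockSum(double[][] ii, int x0, int y0, int x1, int y1) {
        double s = ii[y1][x1];
        if (x0 > 0) s -= ii[y1][x0 - 1];
        if (y0 > 0) s -= ii[y0 - 1][x1];
        if (x0 > 0 && y0 > 0) s += ii[y0 - 1][x0 - 1];
        return s;
    }
}
</pre>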

Results Last Updated: February 2, 2012

For a more detailed discussion of the algorithmic differences between these implementations, see the paper below.

Peter Abeles, "Speeding Up SURF," 9th International Symposium on Visual Computing, 2013. http://boofcv.org/techreports/2013_Abeles_SpeedingUpSURF_ISVC.pdf

<pre>
@inproceedings{abeles2013speeding,
  title={Speeding Up SURF},
  author={Abeles, Peter},
  booktitle={Advances in Visual Computing},
  pages={454--464},
  year={2013},
  publisher={Springer}
}
</pre>

Tested Implementations:

Implementation  Version          Language  Threaded  Comment
BoofCV-F        0.5              Java      No        http://boofcv.org/ (fast but less accurate; see FactoryDescribeRegionPoint.surf())
BoofCV-M        0.5              Java      No        http://boofcv.org/ (accurate but slower; see FactoryDescribeRegionPoint.surfm())
OpenSURF        27/05/2010       C++       No        http://www.chrisevansdev.com/computer-vision-opensurf.html
Reference       1.0.9            C++       No        http://www.vision.ee.ethz.ch/~surf/
JOpenSURF       SVN r24          Java      No        http://code.google.com/p/jopensurf/
JavaSURF        SVN r4           Java      No        http://code.google.com/p/javasurf/
OpenCV          2.3.1 SVN r6879  C++       No [1]    http://opencv.willowgarage.com/wiki/
Pan-o-Matic     0.9.4            C++       No        http://aorlinsk2.free.fr/panomatic/?p=home

[1] OpenCV can be configured to use multi-threaded code if it is compiled with IPP. Only a single thread was used in this test. See http://groups.google.com/group/boofcv/browse_thread/thread/60246015888791e9 for a discussion of this issue.

Benchmark Source Code:

Various Info:

Conclusions

Figure 1: Runtime performance comparison for detecting and describing. Single 850x680 pixel image and 2000 features. Lower is better.
Figure 2: Overall region descriptor stability comparison. Scores are relative to the best library. Higher is better.
Figure 3: Overall stability of interest point detection. Scores are relative to the best library. Higher is better.

For the sake of those with short attention spans, the summary results are posted first and a discussion of testing methodology follows. Figure 1 shows how fast each library could detect and describe interest points. Figures 2 and 3 show a summary of each implementation's relative stability for describing and detecting across a standard set of test images.

For a discussion of these results and an understanding of why different implementations performed better than others, see the paper above. In summary, libraries with better stability did a better job of addressing the smoothness rule. The major differentiators between libraries are how the gradient is interpolated during descriptor calculation and how the image border is handled. The reference library runs slowly, and its binary is no longer compatible with the latest releases of Linux. If an alternative implementation is used to represent SURF's performance, it is recommended that either Pan-o-Matic or BoofCV be used. The other libraries exhibited significantly worse stability in their descriptor and/or detection calculations. BoofCV achieves its speed, despite being a Java library, through several small algorithmic changes.


Descriptor Stability

The stability benchmark was performed using standardized test images from [1], which have known transformations. Stability was measured based on the number of correct associations between two images in the data set. The testing procedure for each library is summarized below:

  1. For each image, detect features (scale and location) using the fast Hessian detector in the reference library.
    • Save results to a file and use the same file for all libraries.
  2. For each image, compute a feature description (including orientation) for all features found.
  3. In each image sequence, associate features in the first image to the Nth image, where N > 1 (a code sketch follows this list).
    • Association is done by minimizing Euclidean error.
    • Validation is done using reverse association, i.e. a match is kept only if it is the optimal association both from image 1 to N and from N back to 1.
  4. Compute the number of correct associations.
    • An association is correct if it is within 3 pixels of the true location.
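
Step 3 above amounts to a mutual nearest-neighbor check on the 64-element SURF descriptors. Below is a minimal Java sketch of that check; it is not taken from the benchmark source code, and all names are illustrative.

<pre>
public class MutualAssociation {
    // Index of the candidate closest to 'target' by (squared) Euclidean distance.
    static int bestMatch(double[] target, double[][] candidates) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < candidates.length; i++) {
            double d = 0;
            for (int j = 0; j < target.length; j++) {
                double diff = target[j] - candidates[i][j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    // Associate image 1 descriptors to image N descriptors, keeping only matches
    // that are optimal in both directions; -1 marks a rejected association.
    static int[] associate(double[][] descA, double[][] descB) {
        int[] matches = new int[descA.length];
        for (int i = 0; i < descA.length; i++) {
            int forward = bestMatch(descA[i], descB);
            boolean mutual = forward >= 0 && bestMatch(descB[forward], descA) == i;
            matches[i] = mutual ? forward : -1;
        }
        return matches;
    }
}
</pre>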

Since the transformation between images is known, the true location could have been used. However, in practice detected features will not lie at the exact transformed point, and a descriptor needs to be tolerant of this type of error. Association against detected locations is therefore a more accurate measure of the descriptor's strength on real-world data.

The stability results shown in Figure 2 display the relative stability across all test images for each library. Each library's raw score is the sum of the percent of correctly associated features across the whole test data set; relative stability is that score divided by the score of the best-performing library.
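
Expressed as a formula (the notation here is mine, not from the original write-up): if <math>p_{l,i}</math> is the percent of correct associations for library <math>l</math> on image pair <math>i</math>, then

<math>
S_l = \sum_i p_{l,i}, \qquad R_l = \frac{S_l}{\max_k S_k}
</math>

so that the best-performing library has relative stability <math>R_l = 1</math>.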

Configuration: All libraries were configured to describe oriented SURF-64 features as defined in the original SURF paper. JavaSURF does not support orientation estimation.


Stability Results

[Stability plots for the bike, boat, graf, leuven, ubc, trees, wall, and bark image sequences]

Detection Stability

SURF feature points are typically detected using the fast Hessian detector described in the SURF paper. Interest point detection stability refers to how well an interest point's location and scale are detected after the image has undergone a transformation. A perfect detector would find each point at exactly the location to which the image transform maps it.

Performance was measured as the fraction of the features detected in the first image which had a corresponding interest point detected in the later images. Two interest points were said to correspond if both their locations and scales were within tolerance. The expected location and scale were computed using the known transformations: scale was computed by sampling four evenly spaced points one pixel away from an interest point in the first frame, applying the known transform to each point, and measuring the change in distance. The expected scale was set to the average distance of the transformed points from the transformed interest point location (see the sketch below).
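
A minimal Java sketch of this expected-scale computation follows. It assumes the known transformation is a 3x3 homography stored in row-major order; all names are illustrative rather than taken from the benchmark source.

<pre>
public class ExpectedScale {
    // Apply a 3x3 row-major homography H to the point (x, y).
    static double[] apply(double[] H, double x, double y) {
        double z = H[6] * x + H[7] * y + H[8];
        return new double[]{(H[0] * x + H[1] * y + H[2]) / z,
                            (H[3] * x + H[4] * y + H[5]) / z};
    }

    // Four points one pixel from (x, y) are transformed; the average distance
    // to the transformed center is the local scale change, which multiplies
    // the interest point's original scale.
    static double expectedScale(double[] H, double x, double y, double scale) {
        double[] c = apply(H, x, y);
        double[][] offsets = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};
        double total = 0;
        for (double[] o : offsets) {
            double[] p = apply(H, x + o[0], y + o[1]);
            total += Math.hypot(p[0] - c[0], p[1] - c[1]);
        }
        return scale * (total / 4.0);
    }
}
</pre>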

When compiling the results it was noticed that libraries which detected more features always scored better under this metric, and that a poorly behaving detector could still score highly. For example, a detector could mark every pixel as an interest point or return densely packed clusters of detections. One of the libraries even had a known bug where false positives were returned near the image border.

To compensate for these issues, an interest point was only counted if it was an unambiguous match, and the detection threshold was tuned so that each library returned the same number of features for at least one image. A match was declared ambiguous if more than one interest point was found close to the expected interest point location.

Library Configurations:

  • Tune detection threshold to detect about 2000 features in graffiti image 1
  • Only consider a 3x3 non-max region
  • Octaves: 4
  • Scales: 4
  • Base Size: 9
  • Initial Pixel Skip: 1

Performance Calculation:

  1. Detect interest points in all images
  2. Transform interest points in image 1 to image N
  3. For each interest point in image 1 (see the sketch after this list):
    1. Find all interest points in image N within 1.5 pixels and 25% of the expected scale.
    2. If the expected pixel location is outside the image, ignore the point.
    3. If the number of matches is more than one, ignore the point.
    4. If the number of matches is exactly one, mark the interest point as a correct detection.
  4. Count the number of valid interest points, i.e. those with zero or one matches.
  5. Detection metric for each image in the sequence is the total number of correct detections divided by the number of valid interest points.
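
A minimal Java sketch of this calculation is below, reusing the hypothetical ExpectedScale helper from the previous section. Interest points are (x, y, scale) triples; the 1.5 pixel and 25% scale tolerances come from the list above, and everything else is illustrative.

<pre>
public class DetectionMetric {
    // Fraction of valid image 1 interest points with exactly one match in image N.
    static double compute(double[][] pointsA, double[][] pointsB,
                          double[] H, int width, int height) {
        int correct = 0, valid = 0;
        for (double[] p : pointsA) {
            double[] e = ExpectedScale.apply(H, p[0], p[1]);
            double es = ExpectedScale.expectedScale(H, p[0], p[1], p[2]);
            // expected location outside the image: ignore entirely
            if (e[0] < 0 || e[0] >= width || e[1] < 0 || e[1] >= height)
                continue;
            int matches = 0;
            for (double[] q : pointsB) {
                boolean closeXY = Math.hypot(q[0] - e[0], q[1] - e[1]) <= 1.5;
                boolean closeScale = Math.abs(q[2] - es) <= 0.25 * es;
                if (closeXY && closeScale) matches++;
            }
            if (matches <= 1) valid++;     // 2+ matches is ambiguous: ignored
            if (matches == 1) correct++;   // exactly one match: correct detection
        }
        return (double) correct / valid;
    }
}
</pre>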

The summary chart is generated by summing the detection metric for each library across all the images and then dividing each library's total by the best library's total.

Runtime Speed

Each library's speed in detecting and describing was benchmarked together (Figure 1). Each test was performed 11 times, with the median time shown. Java libraries tended to exhibit more variation than native libraries, although all libraries showed a significant amount of variation from trial to trial.

Only image processing time essential to SURF was measured, not image loading time. The time to convert the grayscale image into an integral image was measured, but not the time to convert the image to grayscale. Even if these excluded tasks were included, they would account for only a small fraction of the overall computation time. Elapsed time was measured in the actual application using System.currentTimeMillis() in Java and clock() in C++.

Testing Procedure for Describe:

  1. Kill all extraneous processes.
  2. Load feature location and size from file.
  3. Detect interest points and compute descriptors while recording elapsed time.
  4. Compute the elapsed time 10 times and output the best result.
  5. Run the whole experiment 11 times for each library and record the median time (see the sketch after this list).
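
A minimal Java sketch of this procedure follows. The benchmarkOnce callback stands in for whichever library's detect-and-describe call is under test and is hypothetical; the C++ harness used clock() in place of System.currentTimeMillis().

<pre>
import java.util.Arrays;

public class RuntimeBenchmark {
    // Time the workload 10 times and keep the best (fastest) run.
    static long bestOfTen(Runnable benchmarkOnce) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 10; i++) {
            long start = System.currentTimeMillis();
            benchmarkOnce.run();  // detect interest points + compute descriptors
            best = Math.min(best, System.currentTimeMillis() - start);
        }
        return best;
    }

    // Repeat the whole experiment 11 times and report the median.
    static long medianOfEleven(Runnable benchmarkOnce) {
        long[] trials = new long[11];
        for (int i = 0; i < trials.length; i++)
            trials[i] = bestOfTen(benchmarkOnce);
        Arrays.sort(trials);
        return trials[trials.length / 2];
    }
}
</pre>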

Test Computer:

  • Ubuntu 10.10 64bit
  • Quadcore Q6600 2.4 GHz
  • Memory 8194 MB
  • g++ 4.4.5
  • Java(TM) SE Runtime Environment (build 1.6.0_26-b03)

Compiler and JRE Configuration:

  • All native libraries were compiled with -O3
  • Java applications were run with no special flags
    • Java has defaulted to "-server" on desktop computers since Java 6.

Algorithmic Setup:

  • Input image was boat/img1
  • It was impossible to configure the libraries to detect the exact same features
    • The detection threshold was adjusted to top out at around 2000 features
  • Octaves: 4
  • Scales: 4
  • Base Size: 9
  • Initial Pixel Skip: 1


Comments: Both BoofCV-F and BoofCV-M use the same detector, which is why only BoofCV is listed in the detector results.

Change History

  1. February 2, 2012
    • Updated results
    • Changed runtime performance from being the min time to the median time
    • OpenCV no longer detects its own features for descriptor stability, which slightly improved its results
    • Descriptor stability benchmark uses interest points detected from the reference library now
    • Pan-o-Matic was not being compiled with -O3 before
  2. December 5, 2011
    • Forked benchmark source code into its own project and updated links
  3. November 14, 2011
    • Added detect stability results
  4. November 12, 2011
    • Added Pan-o-Matic to overall results