= Comparison of SURF implementations =


[http://en.wikipedia.org/wiki/SURF Speeded Up Robust Feature (SURF)] is a state-of-the-art image region descriptor and detector that is invariant with regard to scale, orientation, and illumination. By using an [http://en.wikipedia.org/wiki/Summed_area_table integral image], the descriptor can be computed efficiently across different scales.  In recent years it has emerged as one of the more popular and frequently used feature descriptors, but it is not a trivial algorithm to implement, and several different implementations exist.  The following study compares several different libraries to determine relative stability and runtime performance.
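
For readers unfamiliar with the integral image trick, the sketch below shows the basic idea in generic Java; it is an illustration only, not BoofCV's implementation. After one pass over the image, the sum of any axis-aligned rectangle can be read back with four lookups, which is what lets SURF evaluate its box filters at any scale in constant time per filter.

<pre>
// Generic illustration of an integral image (not BoofCV's implementation).
public class IntegralImage {

    // ii[y][x] = sum of pixel[0..y][0..x]
    static double[][] compute(double[][] pixel) {
        int h = pixel.length, w = pixel[0].length;
        double[][] ii = new double[h][w];
        for (int y = 0; y < h; y++) {
            double rowSum = 0;
            for (int x = 0; x < w; x++) {
                rowSum += pixel[y][x];
                ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
            }
        }
        return ii;
    }

    // Sum of pixels inside the rectangle [x0,x1] x [y0,y1], inclusive, using four lookups.
    static double blockSum(double[][] ii, int x0, int y0, int x1, int y1) {
        double a = ii[y1][x1];
        double b = (x0 > 0) ? ii[y1][x0 - 1] : 0;
        double c = (y0 > 0) ? ii[y0 - 1][x1] : 0;
        double d = (x0 > 0 && y0 > 0) ? ii[y0 - 1][x0 - 1] : 0;
        return a - b - c + d;
    }
}
</pre>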
 
Results Last Updated: February 2, 2012
 
''For a more detailed discussion of the algorithmic differences between these implementations, see the paper below''
 
[http://boofcv.org/techreports/2013_Abeles_SpeedingUpSURF_ISVC.pdf Peter Abeles, "Speeding Up SURF", 9th International Symposium on Visual Computing, 2013]
 
<pre>
@inproceedings{abeles2013speeding,
  title={Speeding Up SURF},
  author={Abeles, Peter},
  booktitle={Advances in Visual Computing},
  pages={454--464},
  year={2013},
  publisher={Springer}
}
</pre>


Tested Implementations:
{| class="wikitable"
! Implementation
! Version
! Language
! Threaded
! Comment
|-
| BoofCV-F || 0.5 || Java || No || http://boofcv.org/ <br> Fast but less accurate. See FactoryDescribeRegionPoint.surf()
|-
| BoofCV-M || 0.5 || Java || No || http://boofcv.org/ <br> Accurate but slower. See FactoryDescribeRegionPoint.surfm()
|-
| OpenSURF || 27/05/2010 || C++ || No || http://www.chrisevansdev.com/computer-vision-opensurf.html
|-
| Reference || 1.0.9 || C++ || No || http://www.vision.ee.ethz.ch/~surf/
|-
| JOpenSURF || SVN r24 || Java || No || http://code.google.com/p/jopensurf/
|-
| JavaSURF || SVN r4 || Java || No || http://code.google.com/p/javasurf/
|-
| OpenCV || 2.3.1 SVN r6879 || C++ || No [1] || http://opencv.willowgarage.com/wiki/
|-
| Pan-o-Matic || 0.9.4 || C++ || No || http://aorlinsk2.free.fr/panomatic/?p=home
|}
[1] OpenCV can be configured to use multi-threaded code if it is compiled with IPP.  Only a single thread was used in this test.  [http://groups.google.com/group/boofcv/browse_thread/thread/60246015888791e9 Click here for a discussion on this issue.]


Benchmark Source Code:
* [https://github.com/lessthanoptimal/SURFPerformance SURFPerformance]
 
Various Info:
* Study was performed by Peter Abeles
* Questions or Comments?
** Post those here: https://groups.google.com/group/boofcv


= Conclusions =
 
<center>
{|
| width="300" | http://www.boofcv.org/notwiki/images/benchmark_surf/overall_all_speed.gif
|-
! Figure 1: Runtime performance comparison for detecting and describing. Single 850x680 pixel image and 2000 features. Lower is better.
|}
</center>
<center>
{|
| width="300" | http://www.boofcv.org/notwiki/images/benchmark_surf/overall_describe_stability.gif || width="300" | http://www.boofcv.org/notwiki/images/benchmark_surf/overall_detect_stability.gif
|-
! Figure 2: Overall region descriptor stability comparison.  Scores are relative to the best library.  Higher is better. !! Figure 3: Overall stability of interest point detection. Scores are relative to the best library.  Higher is better.
|}
</center>


For the sake of those with short attention spans, the summary results are posted first and a discussion of testing methodology follows.  Figure 1 shows how fast each library could detect and describe interest points.  Figures 2 and 3 show a summary of each implementation's relative stability for describing and detecting across a standard set of test images.


For a discussion of these results and an understanding of why different implementations performed better than others, see the pre-print paper above.  In summary, libraries with better stability did a better job of addressing the smoothness rule.  Major differentiators between libraries involve how the gradient is interpolated during descriptor calculation and how the image border is handled.  The reference library runs slowly and the binary is no longer compatible with the latest releases of Linux.  If an alternative implementation is used to represent SURF's performance, it is recommended that Pan-o-Matic or BoofCV be used.  Other libraries exhibited significantly worse stability in the descriptor and/or detection calculations.  BoofCV is able to achieve its speed despite being a Java library through several small algorithmic changes.




= Descriptor Stability =
<center>
<gallery caption="Images from evaluation data set" heights=150 widths=200 >
Image:Performance-descriptor-Graffiti.jpg|Graffiti Sequence
Image:Performance-descriptor-Boat.jpg|Boat Sequence
Image:Performance-descriptor-Trees.jpg|Trees Sequence
Image:Performance-descriptor-Bricks.jpg|Bricks Sequence
</gallery>
</center>


The stability benchmark was performed using standardized test images from [http://www.robots.ox.ac.uk/~vgg/research/affine/], which have known transformations. Stability was measured based on the number of correct associations between two images in the data set.  The testing procedure for each library is summarized below:


# For each image, detect features (scale and location) using the fast Hessian detector in the reference library.
#* Save results to a file and use the same file for all libraries.
# For each image, compute a feature description (including orientation) for all features found.
# In each image sequence, associate features in the first image to the Nth image, where N > 1.
#* Association is done by minimizing Euclidean error.
#* Validation is done using reverse association, i.e. the association must be the optimal association going from frame 1 to N and N to 1 (see the sketch after this list).
# Compute the number of correct associations.
#* An association is correct if it is within 3 pixels of the true location.
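
The association rule in steps 3 and 4 amounts to a mutual (forward-backward) nearest-neighbor check on the descriptors.  Below is a minimal sketch of that rule, assuming SURF-64 descriptors stored as plain double[] arrays; the class and method names are hypothetical and this is not the benchmark's actual code.

<pre>
// Hypothetical helper, not from the benchmark source: mutual nearest-neighbor
// association of SURF-64 descriptors by Euclidean distance.
public class MutualAssociation {

    // Squared Euclidean distance between two descriptors.
    static double distanceSq(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Index of the descriptor in 'set' closest to 'target', or -1 if 'set' is empty.
    static int nearest(double[] target, double[][] set) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < set.length; i++) {
            double d = distanceSq(target, set[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    // matches[i] = index of the feature in image N associated with feature i in
    // image 1, or -1 if the association is not optimal in both directions.
    static int[] associate(double[][] descsImage1, double[][] descsImageN) {
        int[] matches = new int[descsImage1.length];
        for (int i = 0; i < descsImage1.length; i++) {
            int j = nearest(descsImage1[i], descsImageN);
            matches[i] = (j >= 0 && nearest(descsImageN[j], descsImage1) == i) ? j : -1;
        }
        return matches;
    }
}
</pre>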


Since the transformation is known between images, the true location could have been used.  However, in reality features will not lie at the exact point, and a descriptor needs to be tolerant of this type of error.  Thus, this is a more accurate measure of the descriptor's strength in real-world data.
 
The stability results shown in Figure 2 display the relative stability across all test images for each library.  Relative stability is computed by summing the percent of correctly associated features across the whole test data set for each library, then dividing each library's sum by the sum of the best-performing library.
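
As a concrete illustration of this normalization, a small sketch is shown below; the class name, method name, and array layout are hypothetical and not taken from the benchmark source.

<pre>
// Hypothetical helper (not from the benchmark source) computing the relative
// stability scores plotted in Figures 2 and 3.
public class RelativeStability {

    // Each library's summed percent of correct associations divided by the best library's sum.
    static double[] compute(double[] summedCorrectPercent) {
        double best = 0;
        for (double s : summedCorrectPercent)
            best = Math.max(best, s);

        double[] relative = new double[summedCorrectPercent.length];
        for (int i = 0; i < summedCorrectPercent.length; i++)
            relative[i] = summedCorrectPercent[i] / best;
        return relative;
    }
}
</pre>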


'''Configuration:''' All libraries were configured to describe oriented SURF-64 features as defined in the original SURF paper.  JavaSURF does not support orientation estimation.




= Stability Results =

{|
| http://www.boofcv.org/notwiki/images/benchmark_surf/stability_bike.gif || http://www.boofcv.org/notwiki/images/benchmark_surf/stability_boat.gif
|-
| http://www.boofcv.org/notwiki/images/benchmark_surf/stability_graf.gif || http://www.boofcv.org/notwiki/images/benchmark_surf/stability_leuven.gif
|-
| http://www.boofcv.org/notwiki/images/benchmark_surf/stability_ubc.gif || http://www.boofcv.org/notwiki/images/benchmark_surf/stability_trees.gif
|-
| http://www.boofcv.org/notwiki/images/benchmark_surf/stability_wall.gif || http://www.boofcv.org/notwiki/images/benchmark_surf/stability_bark.gif
|}

= Detection Stability =

SURF feature points are typically detected using the fast Hessian detector described in the SURF paper.  Interest point detection stability refers to how well an interest point's location and scale are detected after the image has undergone a transformation.  A perfect detector would detect a point in the same location as it was in the original image after applying the image transform.

Performance was measured based upon the fraction of features detected in the first image which had a corresponding interest point detected in the later images.  Two interest points were said to correspond if their locations and scales were both within tolerance.  The expected location and scale were computed using the known transformations.  Scale was computed by sampling four evenly spaced points one pixel away from an interest point in the first frame.  The known transform was then applied to each point and the change in distance measured.  The expected scale was set to the average distance of each point from the transformed interest point location.
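
A rough sketch of that expected-scale calculation is shown below, assuming the known transformation is a 3x3 homography stored as a row-major array; the helper names are hypothetical and this is not the benchmark's actual code.

<pre>
// Hypothetical sketch: estimate the expected scale change at (x, y) by mapping
// four points one pixel away through the known homography H and averaging their
// distances to the transformed interest point (the original distance was 1 pixel).
public class ExpectedScale {

    static double[] applyHomography(double[] H, double x, double y) {
        double w = H[6]*x + H[7]*y + H[8];
        return new double[]{ (H[0]*x + H[1]*y + H[2]) / w,
                             (H[3]*x + H[4]*y + H[5]) / w };
    }

    static double expectedScale(double[] H, double x, double y) {
        double[] center = applyHomography(H, x, y);
        double[][] offsets = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}; // four evenly spaced points 1 pixel away
        double sum = 0;
        for (double[] o : offsets) {
            double[] p = applyHomography(H, x + o[0], y + o[1]);
            sum += Math.hypot(p[0] - center[0], p[1] - center[1]);
        }
        return sum / 4.0; // average distance after the transform = expected scale factor
    }
}
</pre>
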
When compiling the results it was noticed that libraries which detected more features always scored better under this metric.  It was also noted that a poorly behaving detector could score highly, for example if every pixel was marked as an interest point or if densely packed clusters were returned as detected features.  One of the libraries even had a known bug where false positives would be returned near the image border.

To compensate for these issues, an interest point was only considered if it was an unambiguous match, and the detection threshold was tuned to return the same number of features for at least one image.  A match was declared ambiguous if more than one interest point was found to be close to the expected interest point location.

Library Configurations:
* Tune detection threshold to detect about 2000 features in graffiti image 1
* Only consider a 3x3 non-max region
* Octaves: 4
* Scales: 4
* Base Size: 9
* Initial Pixel Skip: 1

Performance Calculation:
# Detect interest points in all images
# Transform interest points in image 1 to image N
# For each interest point in image 1:
## Find all interest points in image N within 1.5 pixels and 25% of the expected scale.
## If the expected pixel location is outside the image, ignore.
## If the number of matches is more than one, ignore.
## If the number of matches is one, mark the interest point as a correct detection.
# Count the number of valid interest points, i.e. those with zero or one matches.
# The detection metric for each image in the sequence is the total number of correct detections divided by the number of valid interest points.

The summary chart is generated by summing the detection metric for each library across all the images and then dividing each library's score by the best library's score.
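
As a concrete restatement of the steps above, a minimal sketch of the per-image detection metric is shown below; the {x, y, scale} array layout and the names are hypothetical, not taken from the benchmark source.

<pre>
// Hypothetical sketch of the per-image detection metric: an expected interest
// point (from image 1, transformed into image N) is a correct detection when
// exactly one detection in image N lies within 1.5 pixels and 25% of its scale.
public class DetectionMetric {

    // expected and detected are arrays of {x, y, scale} triplets in image N's coordinates.
    static double compute(double[][] expected, double[][] detected, int width, int height) {
        int correct = 0, valid = 0;
        for (double[] e : expected) {
            // ignore interest points whose expected location falls outside the image
            if (e[0] < 0 || e[1] < 0 || e[0] >= width || e[1] >= height)
                continue;
            int matches = 0;
            for (double[] d : detected) {
                boolean closeLocation = Math.hypot(d[0] - e[0], d[1] - e[1]) <= 1.5;
                boolean closeScale = Math.abs(d[2] - e[2]) <= 0.25 * e[2];
                if (closeLocation && closeScale) matches++;
            }
            if (matches > 1) continue;   // ambiguous match, ignored entirely
            valid++;                     // zero or one match counts as a valid interest point
            if (matches == 1) correct++; // exactly one match is a correct detection
        }
        return valid == 0 ? 0.0 : (double) correct / valid;
    }
}
</pre>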


= Runtime Speed =

Each library's speed in describing and detecting was benchmarked together (Figure 1).  Each test was performed 11 times with the median time being shown.  Java libraries tended to exhibit more variation than native libraries, although all libraries showed a significant amount of variation from trial to trial.

Only image processing time essential to SURF was measured, not image loading time.  Time to convert the gray scale image into an integral image was measured, but not the time to convert the image to grayscale.  Even if these image processing tasks were included, they would only account for a small fraction of the overall computation time.  Elapsed time was measured in the actual application using System.currentTimeMillis() in Java and clock() in C++ (a simplified sketch of the timing loop follows the procedure below).


Testing Procedure for Describe:
# Kill all extraneous processes.
# Load feature location and size from file.
# Detect interest points and compute descriptors while recording elapsed time.
# Compute elapsed time 10 times and output best result.
# Run the whole experiment 11 times for each library and record the median time.
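
The sketch below illustrates the two-level timing loop just described, timing with System.currentTimeMillis() as in the Java benchmarks; the class and method names are hypothetical stand-ins, not the actual benchmark harness.

<pre>
import java.util.Arrays;

// Simplified, hypothetical sketch of the timing procedure: the inner loop times
// the detect/describe call 10 times and keeps the best, the outer loop repeats
// the experiment 11 times and reports the median.
public class RuntimeBenchmark {

    static long bestOfTen(Runnable detectAndDescribe) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 10; i++) {
            long start = System.currentTimeMillis();
            detectAndDescribe.run();
            best = Math.min(best, System.currentTimeMillis() - start);
        }
        return best;
    }

    static long medianTime(Runnable detectAndDescribe, int experiments) {
        long[] times = new long[experiments];
        for (int i = 0; i < experiments; i++)
            times[i] = bestOfTen(detectAndDescribe);
        Arrays.sort(times);
        return times[experiments / 2]; // median for an odd number of experiments, e.g. 11
    }
}
</pre>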


Test Computer:
* Ubuntu 10.10 64bit
* Quadcore Q6600 2.4 GHz
* Memory 8194 MB
* g++ 4.4.5
* Java(TM) SE Runtime Environment (build 1.6.0_26-b03)


Compiler and JRE Configuration:
* All native libraries were compiled with -O3
* Java applications were run with no special flags
** Java has defaulted to "-server" on desktop computers since Java 6, see [http://docs.oracle.com/javase/6/docs/technotes/guides/vm/server-class.html this technote].


Algorithmic Setup:
* input image was boat/img1
* Impossible to configure libraries to detect exact same features
** Adjusted detection threshold to top out at around 2000 features
* Octaves: 4
* Scales: 4
* Base Size: 9
* Initial Pixel Skip: 1


'''Comments:''' Both BoofCV-F and BoofCV-M use the same detector which is why only BoofCV is listed in the detector results.
 
= Change History =
 
# February 2, 2012
#* Updated results
#* Changed runtime performance from being the min time to the median time
#* OpenCV no longer detects its own features for descriptor stability.  Slight improvement in results
#* Descriptor stability benchmark uses interest points detected from the reference library now
#* Pan-o-Matic was not being compiled with -O3 before
# December 5, 2011
#* Forked benchmark source code into its own project and updated links
# November 14, 2011
#* Added detect stability results
# November 12, 2011
#* Added Pan-o-Matic to overall results
