Revision as of 20:24, 26 October 2011
SURF Performance in BoofCV
The SURF descriptor is a state-of-the-art image region descriptor that is scale, orientation, and illumination invariant. By using an integral image it can be computed efficiently across different scales. Inside BoofCV, SURF can be configured in many different ways to create several variants. To ensure correctness and optimal performance, a study was performed comparing its performance against other open source libraries as well as against itself.
It is shown that BoofCV provides two high-quality variants of SURF that are comparable to or better than other popular SURF implementations in C/C++ or Java.
Tested Implementations:
| Implementation | Version | Language | Comment |
|---|---|---|---|
| BoofCV: SURF | 10/2011 | Java | Fast but less accurate. See FactoryDescribeRegionPoint.surf() |
| BoofCV: MSURF | 10/2011 | Java | Accurate but slower. See FactoryDescribeRegionPoint.msurf() |
| OpenSURF | 27/05/2010 | C++ | http://www.chrisevansdev.com/computer-vision-opensurf.html |
| Reference | 1.0.9 | C++ | http://www.vision.ee.ethz.ch/~surf/ |
| JOpenSURF | SVN r24 | Java | http://code.google.com/p/jopensurf/ |
| JavaSURF | SVN r4 | Java | http://code.google.com/p/javasurf/ |
Benchmark Source Code:
Summary Results
(Summary plots: descriptor stability, higher is better; runtime, lower is better.)
Overall performance for each library is summarized in the plots above. Stability performance was computed as the sum of all correct associations throughout the entire image data set. Each library's total was then divided by the best-performing library's total to create a relative plot. See the sections below for testing procedures.
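The relative-score computation described above can be sketched in a few lines of Java. The library names and totals below are made up for illustration; they are not the benchmark's actual numbers.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RelativeScore {
    // Divide each library's total correct associations by the best total,
    // producing the relative values plotted in the summary (1.0 = best library).
    static Map<String, Double> relative(Map<String, Integer> totals) {
        int best = totals.values().stream().mapToInt(Integer::intValue).max().orElse(1);
        Map<String, Double> out = new LinkedHashMap<>();
        totals.forEach((name, total) -> out.put(name, total / (double) best));
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical totals, NOT results from the benchmark
        Map<String, Integer> totals = new LinkedHashMap<>();
        totals.put("LibA", 900);
        totals.put("LibB", 750);
        System.out.println(relative(totals)); // LibA maps to 1.0
    }
}
```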
Descriptor Stability
Tests were performed using standardized test images from [1], which have known transformations between them, so the true feature associations are also known. Stability was measured by the number of correct associations between two images in the data set. The testing procedure is summarized below:
- For each image, detect features (scale and location) using the fast Hessian detector in BoofCV.
- For each image, compute a feature description for all found features.
- In each image sequence, associate features in the first image to the Nth image, where N > 1.
  - Association is done by minimizing Euclidean error.
  - Validation is done using reverse association, i.e. the match must be the optimal association going from frame 1 to N and from N to 1.
- Compute the number of correct associations.
  - An association is correct if it is within 3 pixels of the true location.
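The association-with-reverse-validation step above can be sketched as a mutual nearest-neighbor check. This is a minimal self-contained illustration, not BoofCV's actual association code; the 2-element descriptors are toy data standing in for 64-element SURF descriptors.

```java
import java.util.Arrays;

public class MutualAssociation {
    // Euclidean distance between two descriptors
    static double dist(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Index of the descriptor in 'set' nearest to 'q'
    static int nearest(double[] q, double[][] set) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < set.length; i++) {
            double d = dist(q, set[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    // match[i] = j only if j is nearest to a[i] in B AND i is nearest
    // to b[j] in A, i.e. the association survives reverse validation
    static int[] associate(double[][] a, double[][] b) {
        int[] match = new int[a.length];
        Arrays.fill(match, -1);
        for (int i = 0; i < a.length; i++) {
            int j = nearest(a[i], b);
            if (nearest(b[j], a) == i)
                match[i] = j;
        }
        return match;
    }

    public static void main(String[] args) {
        double[][] a = {{0, 0}, {5, 5}, {9, 9}};
        double[][] b = {{0.1, 0}, {5, 5.1}, {8.9, 9}};
        System.out.println(Arrays.toString(associate(a, b)));
    }
}
```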
Since the transformation is known between images, the true location could have been used directly. In reality, however, detected features will not lie at the exact point, and a descriptor needs to be tolerant of this type of error. Thus this is a more accurate measure of the descriptor's strength.
Configuration: All libraries were configured to describe oriented SURF-64 features as defined in the original SURF paper. JavaSURF does not support orientation estimation.
Stability Results
(Stability result plots for each image sequence.)
Runtime Speed
How fast each library can compute feature descriptions was also benchmarked. Each test was performed several times, with only the best time being shown. Java libraries tended to exhibit more variability than native libraries, but all libraries showed a significant amount of variability from trial to trial. Elapsed time was measured in the actual application using System.currentTimeMillis() in Java and clock() in C++.
Testing Procedure:
- Kill all extraneous processes.
- Load feature location and size from file.
- Compute descriptors for each feature while recording elapsed time.
- Repeat the timed computation 10 times and output the best result.
- Run the whole experiment 4 times for each library and record the best time.
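The best-of-N timing procedure above can be sketched as follows. The workload here is a stand-in loop, not an actual descriptor computation; in the real benchmark each library's describe routine would be timed instead.

```java
public class BestOfTiming {
    // Placeholder workload; assumption: the real benchmark computes SURF
    // descriptors for all detected features here
    static double workload() {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++)
            sum += Math.sqrt(i);
        return sum;
    }

    // Time the workload 'trials' times and keep only the best (smallest)
    // elapsed time, mirroring the procedure described above
    static long bestElapsedMillis(int trials) {
        long best = Long.MAX_VALUE;
        for (int t = 0; t < trials; t++) {
            long start = System.currentTimeMillis();
            workload();
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed < best)
                best = elapsed;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best of 10: " + bestElapsedMillis(10) + " ms");
    }
}
```

Reporting the best of several trials, rather than the mean, reduces the impact of garbage collection pauses and other background activity on the measurement.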
Test Computer:
- Ubuntu 10.10 64bit
- Quad-core Q6600 2.4 GHz
- Memory 8 GB
- g++ 4.4.5
- Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Test Image:
- boat/img1
- Fast Hessian features from BoofCV
- 6415 Total
Compiler and JRE Configuration
- All native libraries were compiled with -O3
- Java applications were run with no special flags
Results can be found at the top of the page.