Example Detect Describe Interface
BoofCV provides several ways to detect and describe interest points inside images. The easiest high-level interface to work with is DetectDescribePoint, which detects and describes all of the interest points in an image in a single call. The alternative is to use separate interfaces for detection, orientation estimation, and description.
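In typical use all of the work happens in a single call to detect(), after which each feature's location, scale, orientation, and descriptor can be queried by index. The snippet below is a minimal sketch of that pattern, using the same (older) BoofCV API as the full example further down; it assumes a DetectDescribePoint named detDesc and an input image named input have already been created as shown there.

    // Sketch only: detDesc and input are assumed to be created as in the example below
    detDesc.detect(input);

    // every detected feature is accessed by its index
    for (int i = 0; i < detDesc.getNumberOfFeatures(); i++) {
        System.out.println("Feature " + i + " at " + detDesc.getLocation(i) + " scale " + detDesc.getScale(i));
        TupleDesc description = detDesc.getDescriptor(i);
        // the descriptor can now be passed on to feature association or other processing
    }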
Concepts:
- Interest point detection
- Local region descriptors
Example Code
import boofcv.abst.feature.describe.DescribeRegionPoint;
import boofcv.abst.feature.detdesc.DetectDescribePoint;
import boofcv.alg.feature.detect.interest.GeneralFeatureDetector;
import boofcv.alg.filter.derivative.GImageDerivativeOps;
import boofcv.core.image.ConvertBufferedImage;
import boofcv.factory.feature.describe.FactoryDescribeRegionPoint;
import boofcv.factory.feature.detdesc.FactoryDetectDescribe;
import boofcv.factory.feature.detect.interest.FactoryDetectPoint;
import boofcv.io.image.UtilImageIO;
import boofcv.struct.feature.TupleDesc;
import boofcv.struct.image.ImageFloat32;
import boofcv.struct.image.ImageSingleBand;

import java.awt.image.BufferedImage;

/**
 * {@link DetectDescribePoint} provides a single unified interface for detecting interest points inside images
 * and describing the features. For some features (e.g. SIFT) it can be much faster than the alternative approach
 * where individual algorithms are used for feature detection, orientation estimation, and description. It also
 * simplifies the code.
 *
 * @author Peter Abeles
 */
public class ExampleDetectDescribe {

    /**
     * For some features there are pre-made implementations of DetectDescribePoint. This has only been done
     * in situations where there was a performance advantage or where it is a very common combination.
     */
    public static <T extends ImageSingleBand, D extends TupleDesc>
    DetectDescribePoint<T,D> createFromPremade( Class<T> imageType ) {
        return (DetectDescribePoint)FactoryDetectDescribe.surf(1, 2, 200, 1, 9, 4, 4, true, imageType);
        // note that SIFT only supports ImageFloat32
//        if( imageType == ImageFloat32.class )
//            return (DetectDescribePoint)FactoryDetectDescribe.sift(4,2,false,-1);
//        else
//            throw new RuntimeException("Unsupported image type");
    }
    /**
     * Any arbitrary implementation of InterestPointDetector, OrientationImage, and DescribeRegionPoint
     * can be combined into a DetectDescribePoint. The syntax is more complex, but the end result is more flexible.
     * This should only be done if there isn't a pre-made DetectDescribePoint.
     */
    public static <T extends ImageSingleBand, D extends ImageSingleBand, TD extends TupleDesc>
    DetectDescribePoint<T, TD> createFromComponents( Class<T> imageType ) {
        // create a corner detector
        Class<D> derivType = GImageDerivativeOps.getDerivativeType(imageType);
        GeneralFeatureDetector<T,D> corner = FactoryDetectPoint.createShiTomasi(2, false, 1, 300, derivType);

        // describe points using BRIEF
        DescribeRegionPoint describe = FactoryDescribeRegionPoint.brief(16, 512, -1, 4, true, imageType);

        // combine the detector and descriptor together
        // NOTE: orientation will not be estimated
        return FactoryDetectDescribe.fuseTogether(corner, null, describe, 1, imageType, derivType);
    }
    public static void main( String args[] ) {
        // select which feature to use
        DetectDescribePoint<ImageFloat32,?> detDesc = createFromPremade(ImageFloat32.class);
//        DetectDescribePoint<ImageFloat32,?> detDesc = createFromComponents(ImageFloat32.class);

        // load an image and convert it into a single band float image
        BufferedImage buffered = UtilImageIO.loadImage("../data/evaluation/outdoors01.jpg");
        ImageFloat32 input = ConvertBufferedImage.convertFrom(buffered, (ImageFloat32)null);

        // detect and describe features inside the image
        detDesc.detect(input);

        // print out how many features were found
        System.out.println("Found " + detDesc.getNumberOfFeatures());

        // print out info for the first feature
        System.out.println("Properties of first feature:");
        System.out.println("Location:    " + detDesc.getLocation(0));
        System.out.println("Scale:       " + detDesc.getScale(0));
        System.out.println("Orientation: " + detDesc.getOrientation(0));
        System.out.println("Descriptor:  " + detDesc.getDescriptor(0));
    }
}
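Note that the descriptor type depends on how the DetectDescribePoint was created: the pre-made SURF implementation in createFromPremade() produces a floating-point descriptor, while createFromComponents() uses BRIEF, which produces a binary descriptor. Descriptors from the two configurations therefore cannot be matched against each other.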