= Image Stitching Tutorial =

<center>
<gallery caption="Images stitched together using example code" heights=150 widths=200>
Image:Stitch_kayaking.jpg
Image:Stitch_mountain.jpg
</gallery>
</center>
Image stitching refers to combining two or more overlapping images into a single large image.  The goal is to find transforms which minimize the error in overlapping regions and provide a smooth transition between images.  Image stitching can be done in many different ways; the methods discussed here are point-based.  To follow this tutorial, a basic understanding of point features and model fitting is required.

BoofCV provides several different ways to identify and describe interest points inside of images.  This tutorial shows how abstracted code can be used to switch between those different algorithms.

Example File: [https://github.com/lessthanoptimal/BoofCV/blob/master/examples/src/boofcv/examples/ImageStitchingExample.java ImageStitchingExample.java]

Concepts:
* Interest point detection
* Region descriptions
* Feature association
* Robust model fitting
* Homography

Relevant Applets:
* [[Applet_Binary_Operations| Binary Operations]]
* [[Applet_Associate_Points| Associate Points]]
* [[Applet_Binary_Segmentation| Binary Segmentation]]
* [[Applet_Description_Region| Region Description]]

= Algorithm Introduction =

Described at a high level, this image stitching algorithm can be summarized as follows:

# Detect feature locations
# Compute feature descriptors
# Associate features together
# Use robust fitting to find the transform
# Render the combined image

The core algorithm is written against abstracted interfaces, which allows different models and algorithms to be swapped in easily.  Output examples are shown at the top of this page.

= Abstracted Code =

The algorithm summarized in the previous section is now shown in code.  Java generics are used to abstract away the input image type as well as the type of model used to describe the image motion.  Checks are done to make sure a compatible detector and describer have been provided.

<pre>
	/**
	 * Using abstracted code, find a transform which minimizes the difference between corresponding features
	 * in both images.  This code is completely model independent and is the core of the algorithm.
	 */
	public static<T extends ImageBase> Homography2D_F64
	computeTransform( T imageA , T imageB ,
			  InterestPointDetector<T> detector ,
			  DescribeRegionPoint<T> describe ,
			  GeneralAssociation<TupleDesc_F64> associate ,
			  ModelMatcher<Homography2D_F64,AssociatedPair> modelMatcher )
	{
		// see if the detector has everything that the describer needs
		if( describe.requiresOrientation() && !detector.hasOrientation() )
			throw new IllegalArgumentException("Requires orientation be provided.");
		if( describe.requiresScale() && !detector.hasScale() )
			throw new IllegalArgumentException("Requires scale be provided.");

		// get the length of the description
		int descriptionDOF = describe.getDescriptionLength();

		List<Point2D_F64> pointsA = new ArrayList<Point2D_F64>();
		FastQueue<TupleDesc_F64> descA = new TupleDescQueue(descriptionDOF,true);
		List<Point2D_F64> pointsB = new ArrayList<Point2D_F64>();
		FastQueue<TupleDesc_F64> descB = new TupleDescQueue(descriptionDOF,true);

		// extract feature locations and descriptions from each image
		describeImage(imageA, detector, describe, pointsA, descA);
		describeImage(imageB, detector, describe, pointsB, descB);

		// Associate features between the two images
		associate.associate(descA,descB);

		// create a list of AssociatedPairs that tell the model matcher how a feature moved
		FastQueue<AssociatedIndex> matches = associate.getMatches();
		List<AssociatedPair> pairs = new ArrayList<AssociatedPair>();

		for( int i = 0; i < matches.size(); i++ ) {
			AssociatedIndex match = matches.get(i);

			Point2D_F64 a = pointsA.get(match.src);
			Point2D_F64 b = pointsB.get(match.dst);

			pairs.add( new AssociatedPair(a,b,false));
		}

		// find the best fit model to describe the change between these images
		if( !modelMatcher.process(pairs,null) )
			throw new RuntimeException("Model Matcher failed!");

		// return the found image transform
		return modelMatcher.getModel();
	}
</pre>
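For intuition on the association step above: the greedy associator declared later in this tutorial, using ScoreAssociateEuclideanSq, scores a candidate pair of features by the squared Euclidean distance between their descriptor vectors and keeps the best scoring matches.  The minimal sketch below illustrates that score; it assumes TupleDesc_F64 exposes its elements as a public value array, and it is not part of the example file.

<pre>
	// Squared Euclidean distance between two descriptors; lower is a better match.
	public static double scoreEuclideanSq( TupleDesc_F64 a , TupleDesc_F64 b ) {
		double total = 0;
		for( int i = 0; i < a.value.length; i++ ) {
			double d = a.value[i] - b.value[i];
			total += d*d;
		}
		return total;
	}
</pre>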

Each image is described as a set of detected interest points and feature descriptions of those interest points.  In the function below, points are first detected and then a description is extracted at each detected point.

<pre>
	/**
	 * Detects features inside the two images and computes descriptions at those points.
	 */
	private static <T extends ImageBase> void describeImage(T image,
								InterestPointDetector<T> detector,
								DescribeRegionPoint<T> describe,
								List<Point2D_F64> points,
								FastQueue<TupleDesc_F64> descs) 
	{
		detector.detect(image);
		describe.setImage(image);

		descs.reset();
		TupleDesc_F64 desc = descs.pop();
		for( int i = 0; i < detector.getNumberOfFeatures(); i++ ) {
			// get the feature location info
			Point2D_F64 p = detector.getLocation(i);
			double yaw = detector.getOrientation(i);
			double scale = detector.getScale(i);

			// extract the description and save the results into the provided description
			if( describe.process(p.x,p.y,yaw,scale,desc) != null ) {
				points.add(p.copy());
				desc = descs.pop();
			}
		}
		// remove the last element from the queue, which has not been used.
		descs.removeTail();
	}
</pre>

= Declaration of Specific Algorithms =

In the previous section abstracted code was used to detect and associate features.  The stitch() function below defines the specific algorithms that are passed in.  This is also where one could change the type of descriptor used to see how that affects performance.

A [http://en.wikipedia.org/wiki/Homography homography] is used to describe the transform between the images.  Homographies assume that all observed features lie on a plane.  While this might sound overly restrictive, it is a good model when the scene is far away or when the camera rotates about its center.
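As a point of reference, applying a homography to a pixel amounts to lifting it to homogeneous coordinates, multiplying by the 3x3 matrix, and dividing out the homogeneous scale.  The sketch below illustrates this; it assumes Homography2D_F64 exposes its matrix entries as public fields a11 through a33, and it is not part of the example file.

<pre>
	// Maps pixel (x,y) in one image to its location in the other image.
	public static Point2D_F64 transform( Homography2D_F64 H , double x , double y ) {
		double xx = H.a11*x + H.a12*y + H.a13;
		double yy = H.a21*x + H.a22*y + H.a23;
		double w  = H.a31*x + H.a32*y + H.a33;

		// normalize by the homogeneous coordinate
		return new Point2D_F64( xx/w , yy/w );
	}
</pre>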


<pre>
	/**
	 * Given two input images, create and display an image in which the two have been overlaid on top of each other.
	 */
	public static <T extends ImageBase> void stitch( BufferedImage imageA , BufferedImage imageB ,
							 Class<T> imageType )
	{
		T inputA = ConvertBufferedImage.convertFrom(imageA , null, imageType);
		T inputB = ConvertBufferedImage.convertFrom(imageB, null, imageType);

		// Detect interest points with Fast Hessian and describe them using SURF
		InterestPointDetector<T> detector = FactoryInterestPoint.fromFastHessian(400,9,4,4);
		DescribeRegionPoint<T> describe = FactoryDescribeRegionPoint.surf(true,imageType);
		GeneralAssociation<TupleDesc_F64> associate = FactoryAssociation.greedy(new ScoreAssociateEuclideanSq(),2,-1,true);

		// fit the images using a homography.  This works well for rotations and distant objects.
		ModelFitterLinearHomography modelFitter = new ModelFitterLinearHomography();
		DistanceHomographySq distance = new DistanceHomographySq();
		int minSamples = modelFitter.getMinimumPoints();
		ModelMatcher<Homography2D_F64,AssociatedPair> modelMatcher = new SimpleInlierRansac<Homography2D_F64,AssociatedPair>(123,modelFitter,distance,60,minSamples,30,1000,9);

		Homography2D_F64 H = computeTransform(inputA, inputB, detector, describe, associate, modelMatcher);

		// draw the results
		HomographyStitchPanel panel = new HomographyStitchPanel(0.5,inputA.width,inputA.height);
		panel.configure(imageA,imageB,H);
		ShowImages.showWindow(panel,"Stitched Images");
	}
</pre>
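To run the example end to end, a main() along the following lines should work.  The file names are hypothetical placeholders; the sketch assumes UtilImageIO for loading the images and the single band image type ImageFloat32.

<pre>
	public static void main( String args[] ) {
		// hypothetical file names; any two overlapping photographs will do
		BufferedImage imageA = UtilImageIO.loadImage("stitch_left.jpg");
		BufferedImage imageB = UtilImageIO.loadImage("stitch_right.jpg");

		// stitch them together as single band floating point images
		stitch(imageA, imageB, ImageFloat32.class);
	}
</pre>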