= Image Stitching Example =
<center>
<gallery caption="Images stitched together using example code" heights=250 widths=250 >
</gallery>
</center>


This example is designed to demonstrate several aspects of BoofCV by stitching images together.  Image stitching refers to combining two or more overlapping images into a single large image.  When stitching images together, the goal is to find a 2D geometric transform which minimizes the error (difference in appearance) in overlapping regions.  There are many ways to do this; in the example below, point image features are detected, associated, and then a 2D transform is robustly estimated from the associated features.


Example File: [https://github.com/lessthanoptimal/BoofCV/blob/v0.40/examples/src/main/java/boofcv/examples/geometry/ExampleImageStitching.java ExampleImageStitching.java]


Concepts:
* Interest point detection
* Region descriptions
* Feature association
* Robust model fitting
* Homography


Relevant Videos:
* [http://www.youtube.com/watch?v=59RJeLlDAxQ YouTube Video]

Related Examples:
* [[Example_Associate_Interest_Points| Associate Interest Points]]
* [[Example_Video_Mosaic| Video Mosaic]]


= Algorithm Introduction =
Described at a high level, this image stitching algorithm can be summarized as follows:


# Detect and describe point features
# Associate features together
# Robust fitting to find transform
# Render combined image
The core algorithm is written using abstracted code, which allows different models and algorithms to be swapped in easily.  Output examples are shown at the top of this page.
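
A [http://en.wikipedia.org/wiki/Homography homography] is used to describe the transform between the images.  Homographies assume that all observed points lie on a plane.  While this might sound overly restrictive, it is a good model when the scene is far away or when the camera only rotates.  Concretely, a homography is a 3x3 matrix <math>H</math> which maps a pixel <math>(x,y)</math> in image A onto the corresponding pixel <math>(x_b,y_b)</math> in image B using homogeneous coordinates:

<math>
\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\qquad
(x_b,\; y_b) = (x'/w',\; y'/w')
</math>

This model does not take into account lens distortion and other physical effects, which introduces some artifacts into the stitched image.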


= Example Code =

<syntaxhighlight lang="java">
public class ExampleImageStitching {
	/**
	 * Using abstracted code, find a transform which minimizes the difference between corresponding features
	 * in both images. This code is completely model independent and is the core of the algorithm.
	 */
	public static <T extends ImageGray<T>, TD extends TupleDesc<TD>> Homography2D_F64
	computeTransform( T imageA, T imageB,
					  DetectDescribePoint<T, TD> detDesc,
					  AssociateDescription<TD> associate,
					  ModelMatcher<Homography2D_F64, AssociatedPair> modelMatcher ) {
		// stores the location and description of each interest point found in the images
		List<Point2D_F64> pointsA = new ArrayList<>();
		DogArray<TD> descA = UtilFeature.createArray(detDesc, 100);
		List<Point2D_F64> pointsB = new ArrayList<>();
		DogArray<TD> descB = UtilFeature.createArray(detDesc, 100);

		// extract feature locations and descriptions from each image
		describeImage(imageA, detDesc, pointsA, descA);
		describeImage(imageB, detDesc, pointsB, descB);

		// Associate features between the two images
		associate.setSource(descA);
		associate.setDestination(descB);
		associate.associate();
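		// (The "source" descriptions come from image A and the "destination" from image B;
		// getMatches() below returns index pairs referring back into those two sets.)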

		// create a list of AssociatedPairs that tell the model matcher how a feature moved
		FastAccess<AssociatedIndex> matches = associate.getMatches();
		List<AssociatedPair> pairs = new ArrayList<>();

		for (int i = 0; i < matches.size(); i++) {
			AssociatedIndex match = matches.get(i);

			Point2D_F64 a = pointsA.get(match.src);
			Point2D_F64 b = pointsB.get(match.dst);

			pairs.add(new AssociatedPair(a, b, false));
		}

		// find the best fit model to describe the change between these images
		if (!modelMatcher.process(pairs))
			throw new RuntimeException("Model Matcher failed!");

		// return a copy of the found image transform; the matcher recycles its model object
		return modelMatcher.getModelParameters().copy();
	}

	/**
	 * Detects features inside the two images and computes descriptions at those points.
	 */
	private static <T extends ImageGray<T>, TD extends TupleDesc<TD>>
	void describeImage( T image,
						DetectDescribePoint<T, TD> detDesc,
						List<Point2D_F64> points,
						DogArray<TD> listDescs ) {
		detDesc.detect(image);

		listDescs.reset();
		for (int i = 0; i < detDesc.getNumberOfFeatures(); i++) {
			points.add(detDesc.getLocation(i).copy());
			listDescs.grow().setTo(detDesc.getDescription(i));
		}
	}

	/**
	 * Given two input images, create and display an image where the two have been overlaid on top of each other.
	 */
	public static <T extends ImageGray<T>>
	void stitch( BufferedImage imageA, BufferedImage imageB, Class<T> imageType ) {
		T inputA = ConvertBufferedImage.convertFromSingle(imageA, null, imageType);
		T inputB = ConvertBufferedImage.convertFromSingle(imageB, null, imageType);

		// Detect using the standard SURF feature descriptor and describer
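		// (This is also where one could change the type of descriptor used to see how it
		// affects stitching performance, since computeTransform() only depends on the
		// abstract interfaces.)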
		DetectDescribePoint detDesc = FactoryDetectDescribe.surfStable(
				new ConfigFastHessian(1, 2, 200, 1, 9, 4, 4), null, null, imageType);
		ScoreAssociation<TupleDesc_F64> scorer = FactoryAssociation.scoreEuclidean(TupleDesc_F64.class, true);
		AssociateDescription<TupleDesc_F64> associate = FactoryAssociation.greedy(new ConfigAssociateGreedy(true, 2), scorer);

		// fit the images using a homography. This works well for rotations and distant objects.
		ModelMatcher<Homography2D_F64, AssociatedPair> modelMatcher =
				FactoryMultiViewRobust.homographyRansac(null, new ConfigRansac(60, 3));

		Homography2D_F64 H = computeTransform(inputA, inputB, detDesc, associate, modelMatcher);

		renderStitching(imageA, imageB, H);
	}

	/**
	 * Renders and displays the stitched-together images.
	 */
	public static void renderStitching( BufferedImage imageA, BufferedImage imageB,
										Homography2D_F64 fromAtoB ) {
		// scale factor used to shrink the images so the stitched result fits in the output
		double scale = 0.5;

		// Convert into a BoofCV color format
		Planar<GrayF32> colorA =
				ConvertBufferedImage.convertFromPlanar(imageA, null, true, GrayF32.class);
		Planar<GrayF32> colorB =
				ConvertBufferedImage.convertFromPlanar(imageB, null, true, GrayF32.class);

		// Where the output images are rendered into
		Planar<GrayF32> work = colorA.createSameShape();

		// Adjust the transform so that the whole image can appear inside of it
		Homography2D_F64 fromAToWork = new Homography2D_F64(scale, 0, colorA.width/4, 0, scale, colorA.height/4, 0, 0, 1);
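		// (fromAToWork shrinks image A by 'scale' and shifts it by a quarter of its width
		// and height, leaving room in the work image for image B to be rendered too.)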
		Homography2D_F64 fromWorkToA = fromAToWork.invert(null);

		// Used to render the results onto an image
		PixelTransformHomography_F32 model = new PixelTransformHomography_F32();
		InterpolatePixelS<GrayF32> interp = FactoryInterpolation.bilinearPixelS(GrayF32.class, BorderType.ZERO);
		ImageDistort<Planar<GrayF32>, Planar<GrayF32>> distort =
				DistortSupport.createDistortPL(GrayF32.class, model, interp, false);
		distort.setRenderAll(false);

		// Render first image
		model.setTo(fromWorkToA);
		distort.apply(colorA, work);

		// Render second image
		Homography2D_F64 fromWorkToB = fromWorkToA.concat(fromAtoB, null);
		model.setTo(fromWorkToB);
		distort.apply(colorB, work);

		// Convert the rendered image into a BufferedImage
		BufferedImage output = new BufferedImage(work.width, work.height, imageA.getType());
		ConvertBufferedImage.convertTo(work, output, true);

		Graphics2D g2 = output.createGraphics();

		// draw lines around the distorted image to make it easier to see
		Homography2D_F64 fromBtoWork = fromWorkToB.invert(null);
		Point2D_I32[] corners = new Point2D_I32[4];
		corners[0] = renderPoint(0, 0, fromBtoWork);
		corners[1] = renderPoint(colorB.width, 0, fromBtoWork);
		corners[2] = renderPoint(colorB.width, colorB.height, fromBtoWork);
		corners[3] = renderPoint(0, colorB.height, fromBtoWork);

		g2.setColor(Color.ORANGE);
		g2.setStroke(new BasicStroke(4));
		g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
		g2.drawLine(corners[0].x, corners[0].y, corners[1].x, corners[1].y);
		g2.drawLine(corners[1].x, corners[1].y, corners[2].x, corners[2].y);
		g2.drawLine(corners[2].x, corners[2].y, corners[3].x, corners[3].y);
		g2.drawLine(corners[3].x, corners[3].y, corners[0].x, corners[0].y);

		ShowImages.showWindow(output, "Stitched Images", true);
	}

	private static Point2D_I32 renderPoint( int x0, int y0, Homography2D_F64 fromBtoWork ) {
		Point2D_F64 result = new Point2D_F64();
		HomographyPointOps_F64.transform(fromBtoWork, new Point2D_F64(x0, y0), result);
		return new Point2D_I32((int)result.x, (int)result.y);
	}

	public static void main( String[] args ) {
		BufferedImage imageA, imageB;
		imageA = UtilImageIO.loadImageNotNull(UtilIO.pathExample("stitch/mountain_rotate_01.jpg"));
		imageB = UtilImageIO.loadImageNotNull(UtilIO.pathExample("stitch/mountain_rotate_03.jpg"));
		stitch(imageA, imageB, GrayF32.class);
		imageA = UtilImageIO.loadImageNotNull(UtilIO.pathExample("stitch/kayak_01.jpg"));
		imageB = UtilImageIO.loadImageNotNull(UtilIO.pathExample("stitch/kayak_03.jpg"));
		stitch(imageA, imageB, GrayF32.class);
		imageA = UtilImageIO.loadImageNotNull(UtilIO.pathExample("scale/rainforest_01.jpg"));
		imageB = UtilImageIO.loadImageNotNull(UtilIO.pathExample("scale/rainforest_02.jpg"));
		stitch(imageA, imageB, GrayF32.class);
	}
}
</syntaxhighlight>
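
Because computeTransform() only depends on abstract interfaces, the specific algorithms are easy to swap out or tune.  As a minimal sketch of such a variation (illustrative only, not part of the original example, and assuming the same imports as the code above), the RANSAC settings could be loosened for noisier image pairs:

<syntaxhighlight lang="java">
// Illustrative alternative settings; these numbers are assumptions, not tuned values.
// More iterations (300) and a larger inlier threshold (6 pixels) make the fit more
// tolerant of noisy feature associations, at the cost of extra computation.
ModelMatcher<Homography2D_F64, AssociatedPair> modelMatcher =
		FactoryMultiViewRobust.homographyRansac(null, new ConfigRansac(300, 6));
</syntaxhighlight>

In the same way a different detector/descriptor combination could be passed in, as long as it implements the DetectDescribePoint interface.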