Example Image Stitching

From BoofCV
= Image Stitching Example =
<center>
<gallery caption="Images stitched together using example code" heights=250 widths=250 >
</gallery>
</center>


This example demonstrates several aspects of BoofCV by stitching images together.  Image stitching refers to combining two or more overlapping images into a single large image.  The goal is to find a 2D geometric transform which minimizes the error (difference in appearance) in overlapping regions.  There are many ways to do this; in the example below, point image features are detected, associated, and then a 2D transform is robustly fit to the associated features.
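Concretely, the 2D transform found in this example is a homography: a 3x3 matrix <code>H</code> that maps a pixel (x,y) in one image to (x',y') in the other via w*[x',y',1] = H*[x,y,1]. The following is a minimal plain-Java sketch of that mapping, with no BoofCV dependency; the class and method names here are made up for illustration only.

```java
/** Illustrative sketch: apply a 3x3 homography to a 2D pixel coordinate. */
public class HomographyDemo {
    /** Returns {x', y'} where w*[x',y',1] = H*[x,y,1]. */
    static double[] transform(double[][] H, double x, double y) {
        double xp = H[0][0]*x + H[0][1]*y + H[0][2];
        double yp = H[1][0]*x + H[1][1]*y + H[1][2];
        double w  = H[2][0]*x + H[2][1]*y + H[2][2];
        // divide out the homogeneous coordinate
        return new double[]{xp / w, yp / w};
    }

    public static void main(String[] args) {
        // A pure translation by (10,5) is a special case of a homography
        double[][] H = {{1,0,10},{0,1,5},{0,0,1}};
        double[] p = transform(H, 3, 4);
        System.out.println(p[0] + " " + p[1]); // 13.0 9.0
    }
}
```

BoofCV performs the same operation on its own types with <code>HomographyPointOps_F64.transform</code>, as seen in the example code below.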


Example File: [https://github.com/lessthanoptimal/BoofCV/blob/v0.36/examples/src/main/java/boofcv/examples/geometry/ExampleImageStitching.java ExampleImageStitching.java]


Concepts:
* Interest point detection
* Region descriptions
* Feature association
* Robust model fitting
* Homography


Relevant Videos:
* [http://www.youtube.com/watch?v=59RJeLlDAxQ YouTube Video]

Related Examples:
* [[Example_Associate_Interest_Points| Associate Interest Points]]
* [[Example_Video_Mosaic| Video Mosaic]]


= Algorithm Introduction =
Described at a high level, this image stitching algorithm can be summarized as follows:


# Detect and describe point features
# Associate features together
# Robust fitting to find transform
# Render combined image
The core algorithm is written against abstracted interfaces, which allows different models and algorithms to be swapped in easily.  Output examples are shown at the top of this page.
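The robust-fitting step deserves a closer look.  RANSAC repeatedly fits a model to a random minimal sample of associated pairs and keeps the hypothesis with the most inliers, which lets it ignore incorrect associations.  Below is a toy plain-Java sketch of that idea using a 1D translation model instead of a homography; all names are illustrative, not BoofCV API.

```java
import java.util.Random;

/** Illustrative sketch of RANSAC with a 1D translation model: b = a + t. */
public class RansacSketch {
    /** pairs[i] = {a_i, b_i}. Returns the translation t supported by the most inliers. */
    static double fitTranslation(double[][] pairs, double tol, int iterations, long seed) {
        Random rand = new Random(seed);
        double best = 0;
        int bestInliers = -1;
        for (int it = 0; it < iterations; it++) {
            // 1) hypothesize a model from a random minimal sample (one pair suffices here)
            double[] s = pairs[rand.nextInt(pairs.length)];
            double t = s[1] - s[0];
            // 2) count how many pairs agree with the hypothesis within tolerance
            int inliers = 0;
            for (double[] p : pairs)
                if (Math.abs(p[1] - p[0] - t) <= tol) inliers++;
            // 3) keep the hypothesis with the most support
            if (inliers > bestInliers) { bestInliers = inliers; best = t; }
        }
        return best;
    }

    public static void main(String[] args) {
        // True shift is 7; the last pair is a gross outlier (bad association)
        double[][] pairs = {{0,7},{1,8},{2,9},{3,10},{4,100}};
        System.out.println(fitTranslation(pairs, 0.5, 50, 42)); // 7.0
    }
}
```

A homography needs four point pairs per minimal sample rather than one, but the hypothesize-and-verify loop is the same; in the example code this is handled by <code>FactoryMultiViewRobust.homographyRansac</code>.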


= Example Code =

<syntaxhighlight lang="java">
public class ExampleImageStitching {

    /**
     * Using abstracted code, find a transform which minimizes the difference between corresponding features
     * in both images.  This code is completely model independent and is the core algorithm.
     */
    public static<T extends ImageGray<T>, FD extends TupleDesc> Homography2D_F64
    computeTransform( T imageA , T imageB ,
                      DetectDescribePoint<T,FD> detDesc ,
                      AssociateDescription<FD> associate ,
                      ModelMatcher<Homography2D_F64,AssociatedPair> modelMatcher )
    {
        // stores the location and description of detected interest points
        List<Point2D_F64> pointsA = new ArrayList<>();
        FastQueue<FD> descA = UtilFeature.createQueue(detDesc,100);
        List<Point2D_F64> pointsB = new ArrayList<>();
        FastQueue<FD> descB = UtilFeature.createQueue(detDesc,100);

        // extract feature locations and descriptions from each image
        describeImage(imageA, detDesc, pointsA, descA);
        describeImage(imageB, detDesc, pointsB, descB);

        // Associate features between the two images
        associate.setSource(descA);
        associate.setDestination(descB);
        associate.associate();

        // create a list of AssociatedPairs that tell the model matcher how a feature moved
        FastAccess<AssociatedIndex> matches = associate.getMatches();
        List<AssociatedPair> pairs = new ArrayList<>();

        for( int i = 0; i < matches.size(); i++ ) {
            AssociatedIndex match = matches.get(i);

            Point2D_F64 a = pointsA.get(match.src);
            Point2D_F64 b = pointsB.get(match.dst);

            pairs.add( new AssociatedPair(a,b,false));
        }

        // find the best fit model to describe the change between these images
        if( !modelMatcher.process(pairs) )
            throw new RuntimeException("Model Matcher failed!");

        // return the found image transform
        return modelMatcher.getModelParameters().copy();
    }

    /**
     * Detects features inside the two images and computes descriptions at those points.
     */
    private static <T extends ImageGray<T>, FD extends TupleDesc>
    void describeImage(T image,
                       DetectDescribePoint<T,FD> detDesc,
                       List<Point2D_F64> points,
                       FastQueue<FD> listDescs) {
        detDesc.detect(image);

        listDescs.reset();
        for( int i = 0; i < detDesc.getNumberOfFeatures(); i++ ) {
            points.add( detDesc.getLocation(i).copy() );
            listDescs.grow().setTo(detDesc.getDescription(i));
        }
    }

    /**
     * Given two input images, create and display an image where the two have been overlaid on top of each other.
     */
    public static <T extends ImageGray<T>>
    void stitch( BufferedImage imageA , BufferedImage imageB , Class<T> imageType )
    {
        T inputA = ConvertBufferedImage.convertFromSingle(imageA, null, imageType);
        T inputB = ConvertBufferedImage.convertFromSingle(imageB, null, imageType);

        // Detect using the standard SURF feature detector and describer
        DetectDescribePoint detDesc = FactoryDetectDescribe.surfStable(
                new ConfigFastHessian(1, 2, 200, 1, 9, 4, 4), null,null, imageType);
        ScoreAssociation<BrightFeature> scorer = FactoryAssociation.scoreEuclidean(BrightFeature.class,true);
        AssociateDescription<BrightFeature> associate = FactoryAssociation.greedy(new ConfigAssociateGreedy(true,2),scorer);

        // fit the images using a homography.  This works well for rotations and distant objects.
        ModelMatcher<Homography2D_F64,AssociatedPair> modelMatcher =
                FactoryMultiViewRobust.homographyRansac(null,new ConfigRansac(60,3));

        Homography2D_F64 H = computeTransform(inputA, inputB, detDesc, associate, modelMatcher);

        renderStitching(imageA,imageB,H);
    }

    /**
     * Renders and displays the stitched together images
     */
    public static void renderStitching( BufferedImage imageA, BufferedImage imageB ,
                                        Homography2D_F64 fromAtoB )
    {
        // specify size of output image
        double scale = 0.5;

        // Convert into a BoofCV color format
        Planar<GrayF32> colorA =
                ConvertBufferedImage.convertFromPlanar(imageA, null, true, GrayF32.class);
        Planar<GrayF32> colorB =
                ConvertBufferedImage.convertFromPlanar(imageB, null,true, GrayF32.class);

        // Where the output images are rendered into
        Planar<GrayF32> work = colorA.createSameShape();

        // Adjust the transform so that the whole image can appear inside of it
        Homography2D_F64 fromAToWork = new Homography2D_F64(scale,0,colorA.width/4,0,scale,colorA.height/4,0,0,1);
        Homography2D_F64 fromWorkToA = fromAToWork.invert(null);

        // Used to render the results onto an image
        PixelTransformHomography_F32 model = new PixelTransformHomography_F32();
        InterpolatePixelS<GrayF32> interp = FactoryInterpolation.bilinearPixelS(GrayF32.class, BorderType.ZERO);
        ImageDistort<Planar<GrayF32>,Planar<GrayF32>> distort =
                DistortSupport.createDistortPL(GrayF32.class, model, interp, false);
        distort.setRenderAll(false);

        // Render first image
        model.set(fromWorkToA);
        distort.apply(colorA,work);

        // Render second image
        Homography2D_F64 fromWorkToB = fromWorkToA.concat(fromAtoB,null);
        model.set(fromWorkToB);
        distort.apply(colorB,work);

        // Convert the rendered image into a BufferedImage
        BufferedImage output = new BufferedImage(work.width,work.height,imageA.getType());
        ConvertBufferedImage.convertTo(work,output,true);

        Graphics2D g2 = output.createGraphics();

        // draw lines around the distorted image to make it easier to see
        Homography2D_F64 fromBtoWork = fromWorkToB.invert(null);
        Point2D_I32 corners[] = new Point2D_I32[4];
        corners[0] = renderPoint(0,0,fromBtoWork);
        corners[1] = renderPoint(colorB.width,0,fromBtoWork);
        corners[2] = renderPoint(colorB.width,colorB.height,fromBtoWork);
        corners[3] = renderPoint(0,colorB.height,fromBtoWork);

        g2.setColor(Color.ORANGE);
        g2.setStroke(new BasicStroke(4));
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        g2.drawLine(corners[0].x,corners[0].y,corners[1].x,corners[1].y);
        g2.drawLine(corners[1].x,corners[1].y,corners[2].x,corners[2].y);
        g2.drawLine(corners[2].x,corners[2].y,corners[3].x,corners[3].y);
        g2.drawLine(corners[3].x,corners[3].y,corners[0].x,corners[0].y);

        ShowImages.showWindow(output,"Stitched Images", true);
    }

    private static Point2D_I32 renderPoint( int x0 , int y0 , Homography2D_F64 fromBtoWork )
    {
        Point2D_F64 result = new Point2D_F64();
        HomographyPointOps_F64.transform(fromBtoWork, new Point2D_F64(x0, y0), result);
        return new Point2D_I32((int)result.x,(int)result.y);
    }

    public static void main( String args[] ) {
        BufferedImage imageA,imageB;
        imageA = UtilImageIO.loadImage(UtilIO.pathExample("stitch/mountain_rotate_01.jpg"));
        imageB = UtilImageIO.loadImage(UtilIO.pathExample("stitch/mountain_rotate_03.jpg"));
        stitch(imageA,imageB, GrayF32.class);
        imageA = UtilImageIO.loadImage(UtilIO.pathExample("stitch/kayak_01.jpg"));
        imageB = UtilImageIO.loadImage(UtilIO.pathExample("stitch/kayak_03.jpg"));
        stitch(imageA,imageB, GrayF32.class);
        imageA = UtilImageIO.loadImage(UtilIO.pathExample("scale/rainforest_01.jpg"));
        imageB = UtilImageIO.loadImage(UtilIO.pathExample("scale/rainforest_02.jpg"));
        stitch(imageA,imageB, GrayF32.class);
    }
}
</syntaxhighlight>
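For readers unfamiliar with the association step, the <code>AssociateDescription</code> instance above greedily pairs each descriptor in the source image with its best-scoring match in the destination.  A simplified plain-Java sketch of that idea is shown below (squared Euclidean score only, with no backwards validation or maximum-error check, both of which the real greedy association supports); the names are illustrative, not BoofCV API.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of greedy descriptor association. */
public class GreedyAssociate {
    /** Returns matches[i] = {srcIndex, dstIndex}, pairing each source descriptor
     *  with its nearest destination descriptor by squared Euclidean distance. */
    static List<int[]> associate(double[][] src, double[][] dst) {
        List<int[]> matches = new ArrayList<>();
        for (int i = 0; i < src.length; i++) {
            int best = -1;
            double bestScore = Double.MAX_VALUE;
            for (int j = 0; j < dst.length; j++) {
                // squared Euclidean distance between the two descriptors
                double score = 0;
                for (int k = 0; k < src[i].length; k++) {
                    double d = src[i][k] - dst[j][k];
                    score += d * d;
                }
                if (score < bestScore) { bestScore = score; best = j; }
            }
            matches.add(new int[]{i, best});
        }
        return matches;
    }

    public static void main(String[] args) {
        double[][] src = {{0, 0}, {5, 5}};
        double[][] dst = {{5.1, 4.9}, {0.2, -0.1}};
        for (int[] m : associate(src, dst))
            System.out.println(m[0] + " -> " + m[1]); // 0 -> 1, then 1 -> 0
    }
}
```

The resulting (src, dst) index pairs play the role of <code>AssociatedIndex</code> in the example above, and are what the robust model fitting consumes.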

Revision as of 22:10, 17 May 2020
