Example Stereo Visual Odometry

From BoofCV
Revision as of 05:58, 5 December 2012 by Peter (talk | contribs)

Stereo visual odometry estimates the camera's ego-motion using a pair of calibrated cameras. Stereo camera systems are inherently more stable than monocular ones because the stereo pair provides good triangulation of image features and resolves the scale ambiguity. The example below shows how to use a high-level interface to visual odometry algorithms. This high-level interface hides internal data structures, such as tracked point features and model-fitting results, which are useful in many applications; an optional interface is therefore provided for accessing some of those structures.
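The scale ambiguity is resolved because the stereo baseline is known: a feature's depth follows directly from its disparity between the rectified left and right images. A minimal sketch of that relation, Z = f·B/d, using hypothetical calibration numbers (not values from the example's stereo.xml):

```java
public class DisparityDepthSketch {
	// hypothetical calibration values for illustration only
	static final double FOCAL_PX = 700.0;   // focal length in pixels
	static final double BASELINE_M = 0.12;  // distance between the cameras in meters

	/** Depth from disparity for a rectified pinhole stereo pair: Z = f * B / d */
	static double depth( double disparityPx ) {
		return FOCAL_PX * BASELINE_M / disparityPx;
	}

	public static void main( String[] args ) {
		// a feature observed with 30 pixels of disparity lies 2.8 m away
		System.out.println(depth(30.0));
	}
}
```

Because the baseline B is a metric quantity, the recovered translation is in real units, which is what lets the stereo system avoid the monocular scale ambiguity.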

Example File: ExampleStereoVisualOdometry.java

Concepts:

  • Structure from Motion
  • Geometric Vision
  • Feature Tracking

/**
 * Bare bones example showing how to estimate the camera's ego-motion using a stereo camera system. Additional
 * information on the scene can be optionally extracted from the algorithm if it implements AccessPointTracks3D.
 *
 * @author Peter Abeles
 */
public class ExampleStereoVisualOdometry {

	public static void main( String args[] ) {

		MediaManager media = DefaultMediaManager.INSTANCE;

		String directory = "../data/applet/vo/backyard/";

		// load camera description and the video sequence
		StereoParameters config = BoofMiscOps.loadXML(media.openFile(directory+"stereo.xml"));
		SimpleImageSequence<ImageUInt8> video1 = media.openVideo(directory+"left.mjpeg",ImageUInt8.class);
		SimpleImageSequence<ImageUInt8> video2 = media.openVideo(directory+"right.mjpeg",ImageUInt8.class);

		// specify how the image features are going to be tracked
		ImagePointTracker<ImageUInt8> tracker =
				FactoryPointSequentialTracker.klt(600,new int[]{1,2,4,8},3,3,2,ImageUInt8.class, ImageSInt16.class);

		// computes the depth of each point
		StereoDisparitySparse<ImageUInt8> disparity =
				FactoryStereoDisparity.regionSparseWta(0, 150, 3, 3, 30, -1, true, ImageUInt8.class);

		// declares the algorithm
		StereoVisualOdometry<ImageUInt8> visualOdometry = FactoryVisualOdometry.stereoDepth(120, 2,
				1.5, tracker, disparity, 0, ImageUInt8.class);

		// Pass in intrinsic/extrinsic calibration.  This can be changed in the future.
		visualOdometry.setCalibration(config);

		// Process the video sequence and output the location plus number of inliers
		while( video1.hasNext() ) {
			ImageUInt8 left = video1.next();
			ImageUInt8 right = video2.next();

			if( !visualOdometry.process(left,right) ) {
				throw new RuntimeException("VO Failed!");
			}

			Se3_F64 leftToWorld = visualOdometry.getLeftToWorld();
			Vector3D_F64 T = leftToWorld.getT();

			System.out.printf("Location %8.2f %8.2f %8.2f      inliers %s\n", T.x, T.y, T.z, countInliers(visualOdometry));
		}
	}

	/**
	 * If the algorithm implements AccessPointTracks3D, then count the number of inlier features
	 * and return a string.
	 */
	public static String countInliers( StereoVisualOdometry alg ) {
		if( !(alg instanceof AccessPointTracks3D))
			return "";

		AccessPointTracks3D access = (AccessPointTracks3D)alg;

		int count = 0;
		int N = access.getAllTracks().size();
		for( int i = 0; i < N; i++ ) {
			if( access.isInlier(i) )
				count++;
		}

		return Integer.toString(count);
	}
}
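Note that getLeftToWorld() returns the accumulated pose of the left camera, not the per-frame motion. The motion between two consecutive frames can be recovered by composing the two poses. A minimal sketch of the translation part in plain Java (not the BoofCV Se3_F64 API), assuming each pose is given as a 3x3 rotation matrix plus a translation vector:

```java
public class RelativeMotionSketch {
	/** Translation of pose B expressed in the frame of pose A: R_a^T * (t_b - t_a) */
	static double[] relativeTranslation( double[][] Ra, double[] ta, double[] tb ) {
		double[] d = { tb[0]-ta[0], tb[1]-ta[1], tb[2]-ta[2] };
		double[] out = new double[3];
		// multiply by the transpose of Ra by swapping the row/column indices
		for( int i = 0; i < 3; i++ )
			for( int j = 0; j < 3; j++ )
				out[i] += Ra[j][i]*d[j];
		return out;
	}

	public static void main( String[] args ) {
		// frame A at the origin with identity rotation; frame B moved 0.5 m along +z
		double[][] Ra = { {1,0,0}, {0,1,0}, {0,0,1} };
		double[] motion = relativeTranslation(Ra, new double[]{0,0,0}, new double[]{0,0,0.5});
		System.out.printf("dz = %.2f%n", motion[2]);
	}
}
```

With Se3_F64 the same composition is available through its concat/invert operations; the sketch above just spells out the translation algebra.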