Example Visual Odometry Monocular Plane
From BoofCV
This example demonstrates how to estimate the camera's ego-motion using a single camera and a known plane. Because the plane's location relative to the camera is known, there is no scale ambiguity, unlike the more general single-camera solution.
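In outline (this is standard multiple-view geometry rather than anything specific to BoofCV), two camera views of the same planar surface are related by the planar homography

	H = R + (1/d) t n^T

where R and t are the relative rotation and translation between the views, n is the plane's normal, and d is the camera's distance to the plane. Because the plane's geometry (n and d) is supplied by the calibration, decomposing H recovers t in metric units rather than only up to an unknown scale factor.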
Example Code:
Concepts:
- Plane/Homography
Relevant Applets:
Related Examples:
Example Code
/**
* Bare bones example showing how to estimate the camera's ego-motion using a single camera and a known
* plane. Additional information on the scene can be optionally extracted from the algorithm,
* if it implements AccessPointTracks3D.
*
* @author Peter Abeles
*/
public class ExampleVisualOdometryMonocularPlane {
	public static void main( String[] args ) {
		MediaManager media = DefaultMediaManager.INSTANCE;

		String directory = "../data/applet/vo/drc/";

		// load camera description and the video sequence
		MonoPlaneParameters calibration = BoofMiscOps.loadXML(media.openFile(directory + "mono_plane.xml"));
		SimpleImageSequence<ImageUInt8> video = media.openVideo(directory + "left.mjpeg", ImageDataType.single(ImageUInt8.class));

		// specify how the image features are going to be tracked
		PkltConfig<ImageUInt8, ImageSInt16> configKlt = PkltConfig.createDefault(ImageUInt8.class, ImageSInt16.class);
		configKlt.pyramidScaling = new int[]{1, 2, 4, 8}; // scale factor of each layer in the image pyramid
		configKlt.templateRadius = 3;                     // radius of the template tracked around each feature

		PointTrackerTwoPass<ImageUInt8> tracker =
				FactoryPointTrackerTwoPass.klt(configKlt, new ConfigGeneralDetector(600, 3, 1));
		// declares the algorithm. The numeric arguments tune track management (when tracks are spawned and dropped),
		// the inlier tolerance in pixels, and the number of robust-estimation iterations; see the
		// FactoryVisualOdometry javadoc for the exact meaning of each argument.
		MonocularPlaneVisualOdometry<ImageUInt8> visualOdometry =
				FactoryVisualOdometry.monoPlaneInfinity(75, 2, 1.5, 200, tracker, ImageDataType.single(ImageUInt8.class));

		// Pass in intrinsic/extrinsic calibration. This can be changed in the future.
		visualOdometry.setCalibration(calibration);
		// Process the video sequence and output the location plus number of inliers
		while( video.hasNext() ) {
			ImageUInt8 left = video.next();

			if( !visualOdometry.process(left) ) {
				throw new RuntimeException("VO Failed!");
			}

			Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();
			Vector3D_F64 T = leftToWorld.getT();

			System.out.printf("Location %8.2f %8.2f %8.2f inliers %s\n", T.x, T.y, T.z, inlierPercent(visualOdometry));
		}
	}

	/**
	 * If the algorithm implements AccessPointTracks3D, then compute the percentage of tracked features
	 * which are inliers and return it as a string. Otherwise an empty string is returned.
	 */
	public static String inlierPercent(VisualOdometry<?> alg) {
		if( !(alg instanceof AccessPointTracks3D) )
			return "";

		AccessPointTracks3D access = (AccessPointTracks3D)alg;

		int count = 0;
		int N = access.getAllTracks().size();
		for( int i = 0; i < N; i++ ) {
			if( access.isInlier(i) )
				count++;
		}

		// "%%" prints a literal '%' in front of the inlier percentage
		return String.format("%%%5.3f", 100.0 * count / N);
	}
}
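The Se3_F64 returned by getCameraToWorld() is a full rigid-body transform (rotation plus translation), not just the translation printed above. Below is a minimal sketch, assuming only the georegression library that BoofCV depends on, of how that transform can be used to map a point from the current camera frame into the world frame. The class name and the point coordinates are made up for illustration.

import georegression.struct.point.Point3D_F64;
import georegression.struct.se.Se3_F64;
import georegression.transform.se.SePointOps_F64;

public class ExampleTransformCameraPoint {
	public static void main( String[] args ) {
		// in the example above this transform would come from visualOdometry.getCameraToWorld()
		Se3_F64 cameraToWorld = new Se3_F64();
		cameraToWorld.getT().set(0.5, 0.0, 2.0); // hypothetical camera location in the world frame

		// a point one meter in front of the camera, expressed in the camera frame
		Point3D_F64 pointCamera = new Point3D_F64(0, 0, 1);
		Point3D_F64 pointWorld = new Point3D_F64();

		// apply the rigid-body transform: world = R*camera + T
		SePointOps_F64.transform(cameraToWorld, pointCamera, pointWorld);

		System.out.println("World coordinates: " + pointWorld);
	}
}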