Example Calibration Target Pose

From BoofCV
[[file:Example_calibration_pose.jpg|frame|center|Left: Last image in the video sequence.  Right: Side view in 3D.  Green dots are the target's trajectory and black dots are calibration points in the last frame.]]


In addition to calibration, calibration targets can be used to estimate the pose of objects in a scene to a high degree of accuracy.  The location of calibration points can be found very precisely in the image, making this approach more accurate than more general purpose fiducials.  How accurate it is depends on the target's distance and orientation.
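As a rough illustration of those two factors, a pose's range and skew can be read directly off its translation vector and rotation matrix.  This is a plain-Java sketch, not BoofCV code: the helper names are invented and the pose values are made up for illustration.

```java
public class PoseStats {
	// Range = length of the translation vector (camera origin to target origin)
	static double range( double[] t ) {
		return Math.sqrt(t[0]*t[0] + t[1]*t[1] + t[2]*t[2]);
	}

	// Skew = angle between the camera's viewing direction (+z) and the target
	// plane's normal.  The normal in the camera frame is R*(0,0,1), i.e. the
	// third column of R, so its dot product with +z is simply R[2][2].
	static double skewDegrees( double[][] R ) {
		double cos = Math.abs(R[2][2]);
		return Math.toDegrees(Math.acos(Math.min(1.0, cos)));
	}

	public static void main( String[] args ) {
		double[] t = {0.1, 0.0, 0.6};             // invented translation, meters
		double[][] R = {{1,0,0},{0,1,0},{0,0,1}}; // target facing the camera head on
		System.out.printf("range=%.3f m  skew=%.1f deg%n", range(t), skewDegrees(R));
	}
}
```

As range grows or skew approaches 90 degrees, fewer pixels cover each square and the pose estimate degrades, which is the effect visible in the video below.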
 


Example Code:
* [https://github.com/lessthanoptimal/BoofCV/blob/v0.35/examples/src/main/java/boofcv/examples/fiducial/ExamplePoseOfCalibrationTarget.java ExamplePoseOfCalibrationTarget.java]


Concepts:
* Calibration target
* Pose estimation

Related Examples:
* [[Example_Detect_Calibration_Target| Detecting Calibration Target]]

Videos:
* [https://youtu.be/qJWDK_FrgHE Fiducial Overview]


= Example Code =
<syntaxhighlight lang="java">
/**
 * The 6-DOF pose of calibration targets can be estimated very accurately[*] once a camera has been calibrated.
 * In this example the high level FiducialDetector interface is used with a chessboard calibration target to
 * process a video sequence. Once the pose of the target is known the location of each calibration point is
 * found in the camera frame and visualized.
 *
 * [*] Accuracy is dependent on a variety of factors. Calibration targets are primarily designed to be viewed up close
 * and their accuracy drops with range, as can be seen in this example.
 *
 * @author Peter Abeles
 */
public class ExamplePoseOfCalibrationTarget {

	public static void main( String[] args ) {

		// Load camera calibration
		CameraPinholeBrown intrinsic =
				CalibrationIO.load(UtilIO.pathExample("calibration/mono/Sony_DSC-HX5V_Chess/intrinsic.yaml"));
		LensDistortionNarrowFOV lensDistortion = new LensDistortionBrown(intrinsic);

		// load the video file
		String fileName = UtilIO.pathExample("tracking/chessboard_SonyDSC_01.mjpeg");
		SimpleImageSequence<GrayF32> video =
				DefaultMediaManager.INSTANCE.openVideo(fileName, ImageType.single(GrayF32.class));
//				DefaultMediaManager.INSTANCE.openCamera(null, 640, 480, ImageType.single(GrayF32.class));

		// Let's use the FiducialDetector interface since it is much easier than coding up
		// the entire thing ourselves.  Look at FiducialDetector's code if you want to understand how it works.
		CalibrationFiducialDetector<GrayF32> detector =
				FactoryFiducial.calibChessboardX(null, new ConfigGridDimen(4, 5, 0.03), GrayF32.class);

		// Need to remove lens distortion for accurate pose estimation
		detector.setLensDistortion(lensDistortion, intrinsic.width, intrinsic.height);

		// Get the 2D coordinate of calibration points for visualization purposes
		List<Point2D_F64> calibPts = detector.getCalibrationPoints();

		// Set up visualization
		PointCloudViewer viewer = VisualizeData.createPointCloudViewer();
		viewer.setCameraHFov(PerspectiveOps.computeHFov(intrinsic));
		viewer.setTranslationStep(0.01);
		viewer.setBackgroundColor(0xFFFFFF); // white background
		// make the view more interesting by looking at the scene from the side
		DMatrixRMaj rotY = ConvertRotation3D_F64.rotY(-Math.PI/2.0, null);
		viewer.setCameraToWorld(new Se3_F64(rotY, new Vector3D_F64(0.75, 0, 1.25)).invert(null));
		ImagePanel imagePanel = new ImagePanel(intrinsic.width, intrinsic.height);
		JComponent viewerComponent = viewer.getComponent();
		viewerComponent.setPreferredSize(new Dimension(intrinsic.width, intrinsic.height));
		PanelGridPanel gui = new PanelGridPanel(1, imagePanel, viewerComponent);
		gui.setMaximumSize(gui.getPreferredSize());
		ShowImages.showWindow(gui, "Calibration Target Pose", true);

		// Allows the user to click on the image and pause
		MousePauseHelper pauseHelper = new MousePauseHelper(gui);

		// saves the target's center location
		List<Point3D_F64> path = new ArrayList<>();

		// Process each frame in the video sequence
		Se3_F64 targetToCamera = new Se3_F64();
		while( video.hasNext() ) {

			// detect calibration points
			detector.detect(video.next());

			if( detector.totalFound() == 1 ) {
				detector.getFiducialToCamera(0, targetToCamera);

				// Visualization.  Show a path with green points and the calibration points in black
				viewer.clearPoints();

				Point3D_F64 center = new Point3D_F64();
				SePointOps_F64.transform(targetToCamera, center, center);
				path.add(center);

				for (Point3D_F64 p : path) {
					viewer.addPoint(p.x, p.y, p.z, 0x00FF00);
				}

				for (int j = 0; j < calibPts.size(); j++) {
					Point2D_F64 p = calibPts.get(j);
					Point3D_F64 p3 = new Point3D_F64(p.x, p.y, 0);
					SePointOps_F64.transform(targetToCamera, p3, p3);
					viewer.addPoint(p3.x, p3.y, p3.z, 0);
				}
			}

			imagePanel.setImage((BufferedImage) video.getGuiImage());
			viewerComponent.repaint();
			imagePanel.repaint();

			BoofMiscOps.pause(30);
			while( pauseHelper.isPaused() ) {
				BoofMiscOps.pause(30);
			}
		}
	}
}
</syntaxhighlight>

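The core geometric step in the example is applying the target-to-camera rigid-body transform to each calibration point, which is what SePointOps_F64.transform() computes: p_camera = R * p_target + T.  Below is a minimal dependency-free sketch of that operation; the rotation and translation values are made up for illustration.

```java
public class RigidBodySketch {
	// Apply the rigid-body transform p' = R*p + t to a 3D point
	static double[] transform( double[][] R, double[] t, double[] p ) {
		double[] out = new double[3];
		for (int i = 0; i < 3; i++) {
			out[i] = R[i][0]*p[0] + R[i][1]*p[1] + R[i][2]*p[2] + t[i];
		}
		return out;
	}

	public static void main( String[] args ) {
		// Identity rotation: target axes aligned with the camera axes
		double[][] R = {{1,0,0},{0,1,0},{0,0,1}};
		// Target origin 0.75 m in front of the camera along +z
		double[] T = {0, 0, 0.75};

		// A calibration point on the target plane (z = 0 in the target frame),
		// e.g. a corner one 3 cm chessboard square from the target origin
		double[] pTarget = {0.03, 0.0, 0.0};
		double[] pCamera = transform(R, T, pTarget);

		System.out.printf("%.2f %.2f %.2f%n", pCamera[0], pCamera[1], pCamera[2]);
	}
}
```

This is exactly how the example maps each 2D calibration point (with z = 0 appended) into the camera frame before adding it to the point cloud.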
Revision as of 18:01, 23 December 2019