Class PnPLepetitEPnP

java.lang.Object
boofcv.alg.geo.pose.PnPLepetitEPnP

public class PnPLepetitEPnP extends Object

Implementation of the EPnP algorithm from [1] for solving the PnP problem when N ≥ 5 for the general case and N ≥ 4 for planar data (see note below). Given a calibrated camera and N pairs of 2D point observations with their known 3D world coordinates, it solves for the camera's pose. This solution is non-iterative and is claimed to be much faster and more accurate than the alternatives. Works for both planar and non-planar configurations.

Expresses the N 3D points as weighted sums of four virtual control points. The problem then becomes estimating the coordinates of the control points in the camera reference frame, which can be done in O(n) time.

After estimating the control points the solution can be refined using Gauss-Newton non-linear optimization. The objective function being optimized reduces the difference between world and camera control point distances by adjusting the values of beta. Optimization is very fast, but can degrade accuracy if over-optimized; see the warning below. To turn on non-linear optimization set setNumIterations(int) to a positive number.
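For reference, the quantity minimized in that step (paraphrasing the paper, with c_i^cam the camera-frame control points implied by the betas and c_i^world the world control points) is:

    Error(beta) = sum over pairs (i,j), i < j, of ( ||c_i^cam - c_j^cam||^2 - ||c_i^world - c_j^world||^2 )^2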

After experimentation there doesn't seem to be any universally best way to choose the control point distribution. To allow tuning for specific problems, a 'magic number' has been provided. Larger values increase the size of the control point distribution. In general smaller numbers appear to be better for noisier data, but can degrade results if too small.

MINIMUM POINTS: According to the original paper [1] only 4 points are needed for the general case. In practice the minimum case produces very poor results on average. The Matlab code provided by the original author also exhibits instability in the minimum case, and the article does not show results for the minimum case, making it hard to judge what should be expected. However, I'm not ruling out a bug in the relinearization; see that code for comments. Correspondence with the original author indicates that he did not expect results from relinearization to be as accurate as the other cases.

NOTES: This implementation deviates in some minor ways from what was described in the paper. However, the authors' own example code (Matlab and C++) is also mutually different in significant ways. See how solutions are scored, how linear systems are solved, and how world control points are computed. The way control points are computed here is inspired by their C++ example (the technique used in their Matlab example has some stability issues), but there is probably room for more improvement.

WARNING: Setting the number of optimization iterations too high can actually degrade accuracy. The objective function being minimized is not the observation residual. Locally it appears to be a good approximation, but it can diverge and actually produce worse results. Because of this behavior, more advanced optimization routines are unnecessary and counterproductive.

[1] Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua, "EPnP: An Accurate O(n) Solution to the PnP Problem", Int. J. Comput. Vision, vol. 81, issue 2, 2009
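For orientation, a minimal usage sketch (the point and pose types come from the GeoRegression library; the magic number and iteration count are simply the suggestions given in this documentation):

    import georegression.struct.point.Point2D_F64;
    import georegression.struct.point.Point3D_F64;
    import georegression.struct.se.Se3_F64;

    import java.util.ArrayList;
    import java.util.List;

    public class ExamplePnPLepetitEPnP {
        public static void main(String[] args) {
            List<Point3D_F64> worldPts = new ArrayList<>();
            List<Point2D_F64> observed = new ArrayList<>();
            // ... fill worldPts with known 3D feature locations and 'observed'
            //     with the matching observations in normalized image coordinates ...

            PnPLepetitEPnP alg = new PnPLepetitEPnP(0.1); // magic number; values < 1 suggested
            alg.setNumIterations(10);                     // turn on Gauss-Newton refinement

            Se3_F64 solution = new Se3_F64();
            alg.process(worldPts, observed, solution);
            // 'solution' now holds the estimated rotation and translation
        }
    }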

  • Field Details

  • Constructor Details

    • PnPLepetitEPnP

      public PnPLepetitEPnP()
Constructor which uses the default magic number.
    • PnPLepetitEPnP

      public PnPLepetitEPnP(double magicNumber)
      Constructor which allows configuration of the magic number.
      Parameters:
magicNumber - Magic number which changes the distribution of the world control points. Values less than one seem to work best. Try 0.1.
  • Method Details

    • setNumIterations

      public void setNumIterations(int numIterations)
      Used to turn on and off non-linear optimization. To turn on set to a positive number. See warning in class description about setting the number of iterations too high.
      Parameters:
      numIterations - Number of iterations. Try 10.
    • process

      public void process(List<Point3D_F64> worldPts, List<Point2D_F64> observed, Se3_F64 solutionModel)
      Compute camera motion given a set of features with observations and 3D locations
      Parameters:
      worldPts - Known location of features in 3D world coordinates
      observed - Observed location of features in normalized camera coordinates
      solutionModel - Output: Storage for the found solution.
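Note that the observations must already be in normalized image coordinates rather than pixels. A minimal sketch of that conversion for a simple pinhole model with focal lengths (fx, fy) and principal point (cx, cy), ignoring lens distortion:

    // pixel observation (px, py) -> normalized image coordinates
    Point2D_F64 norm = new Point2D_F64();
    norm.x = (px - cx) / fx;
    norm.y = (py - cy) / fy;
    observed.add(norm);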
    • selectWorldControlPoints

      public void selectWorldControlPoints(List<Point3D_F64> worldPts, DogArray<Point3D_F64> controlWorldPts)
Selects control points along the data's axes and at the data's centroid. If the data is determined to be planar then only 3 control points are selected. The data's axes are determined by computing the covariance matrix then performing SVD; each axis lies along one of the resulting singular vectors.
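A rough sketch of that idea, assuming EJML for the decomposition and two hypothetical helpers computeCentroid and computeCovariance (illustrative only, not the actual implementation):

    // Place one control point at the centroid and one along each principal axis,
    // scaled by the data's spread and the "magic number".
    Point3D_F64 centroid = computeCentroid(worldPts);                 // hypothetical helper
    DMatrixRMaj covariance = computeCovariance(worldPts, centroid);   // hypothetical helper

    SingularValueDecomposition_F64<DMatrixRMaj> svd =
            DecompositionFactory_DDRM.svd(3, 3, false, true, false);
    svd.decompose(covariance);
    DMatrixRMaj V = svd.getV(null, false);
    double[] sv = svd.getSingularValues();

    Point3D_F64 c0 = controlWorldPts.grow();
    c0.x = centroid.x; c0.y = centroid.y; c0.z = centroid.z;
    for (int axis = 0; axis < 3; axis++) {
        double scale = magicNumber * Math.sqrt(sv[axis]);             // exact scaling is a guess
        Point3D_F64 c = controlWorldPts.grow();
        c.x = centroid.x + scale * V.get(0, axis);
        c.y = centroid.y + scale * V.get(1, axis);
        c.z = centroid.z + scale * V.get(2, axis);
    }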
    • computeBarycentricCoordinates

      protected void computeBarycentricCoordinates(DogArray<Point3D_F64> controlWorldPts, DMatrixRMaj alphas, List<Point3D_F64> worldPts)

Given the control points it computes the 4 weights for each point. This is done by solving the linear equation C*α = X, where C is the control point matrix, α is the 4 by N matrix containing the solution, and X is the point matrix. N is the number of points.

      C = [ controlPts' ; ones(1,4) ]
      X = [ cameraPts' ; ones(1,N) ]
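Under that formulation the weights for all points can be solved at once; a minimal sketch assuming EJML (not the actual implementation):

    // C is 4x4 (control points transposed, plus a row of ones),
    // X is 4xN (points transposed, plus a row of ones).
    DMatrixRMaj alpha = new DMatrixRMaj(4, N);
    CommonOps_DDRM.solve(C, X, alpha);   // solves C*alpha = X for the barycentric weights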

    • constructM

      protected static void constructM(List<Point2D_F64> obsPts, DMatrixRMaj alphas, DMatrixRMaj M)
Constructs the linear system which is to be solved:

    sum_j  a_ij*x_j - a_ij*u_i*z_j = 0
    sum_j  a_ij*y_j - a_ij*v_i*z_j = 0

where (x_j, y_j, z_j) is the j-th control point being solved for and (u_i, v_i) is the i-th observed normalized point.
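As an illustration of how those constraints could be packed into M (a sketch only; it assumes alphas stores the weight of control point j for observation i at row i, column j, which may not match the internal layout):

    // M has 2*N rows and 12 columns: three unknowns (x_j, y_j, z_j) per control point.
    for (int i = 0; i < obsPts.size(); i++) {
        Point2D_F64 obs = obsPts.get(i);
        for (int j = 0; j < 4; j++) {
            double a = alphas.get(i, j);
            // x-constraint row:  a*x_j - a*u_i*z_j = 0
            M.set(2*i,     3*j,     a);
            M.set(2*i,     3*j + 2, -a*obs.x);
            // y-constraint row:  a*y_j - a*v_i*z_j = 0
            M.set(2*i + 1, 3*j + 1, a);
            M.set(2*i + 1, 3*j + 2, -a*obs.y);
        }
    }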
    • extractNullPoints

      protected void extractNullPoints(DMatrixRMaj M)
      Computes M'*M and finds the null space. The 4 eigenvectors with the smallest eigenvalues are found and the null points extracted from them.
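A sketch of that step using EJML (illustrative; the actual implementation may differ in how the eigenpairs are sorted and stored):

    // Form M'M (12x12) and take the eigenvectors with the smallest eigenvalues;
    // they span the approximate null space of M.
    DMatrixRMaj MtM = new DMatrixRMaj(12, 12);
    CommonOps_DDRM.multTransA(M, M, MtM);

    EigenDecomposition_F64<DMatrixRMaj> eig = DecompositionFactory_DDRM.eig(12, true, true);
    eig.decompose(MtM);
    // sort the eigenpairs by eigenvalue and keep the 4 smallest; each selected
    // 12-element eigenvector is then split into four 3D "null points"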
    • matchScale

      protected double matchScale(List<Point3D_F64> nullPts, DogArray<Point3D_F64> controlWorldPts)
      Examines the distance each point is from the centroid to determine the scaling difference between world control points and the null points.
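One plausible way to express that scale factor (not necessarily the exact expression used internally):

    scale = sum_i ||worldCtrl_i - worldCentroid|| / sum_i ||nullPt_i - nullCentroid||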
    • estimateCase1

      protected void estimateCase1(double[] betas)
      Simple analytical solution. Just need to solve for the scale difference in one set of potential control points.
    • estimateCase2

      protected void estimateCase2(double[] betas)
    • estimateCase3

      protected void estimateCase3(double[] betas)
    • estimateCase3_planar

      protected void estimateCase3_planar(double[] betas)
If the data is planar, relinearization is used to estimate the betas.
    • estimateCase4

      protected void estimateCase4(double[] betas)
    • getMinPoints

      public int getMinPoints()
Returns the minimum number of points required to make an estimate. Technically it is 4 for the general case and 3 for planar data. The minimum-point cases seem to be a bit unstable in some situations; minimum + 1 or more points is stable.
      Returns:
      minimum number of points