Class PoseFromPairLinear6
Estimates the camera motion using linear algebra given a set of N ≥ 6 associated point observations and the depth (z-coordinate) of each point. Note that this is similar to, but not exactly, the PnP problem.
The output of this class is a rotation and translation that convert a point from the first camera's reference frame to the second's:
X' = R*X + T
where R is a rotation matrix, T is a translation vector, X is a coordinate in the first reference frame, and X' is the same coordinate in the second.
This approach is a modified version of the one discussed in [1]. It is derived from the bilinear and trilinear constraints, as discussed in Section 8.3 of that book. It has been modified to remove redundant rows and so that the computed rotation matrix is row major. The solution is derived from the equation below by computing the null space of the resulting matrix:
cross(x_2)*(A*x_1) + cross(x_2)*T/λ_1 = 0
where cross(x) is the skew-symmetric cross-product matrix of x, x_i is the pixel coordinate (normalized or not) in the i-th image, A is the rotation matrix, T the translation vector, and λ_1 the point's depth in the first view.
[1] Page 279 in "An Invitation to 3-D Vision: From Images to Geometric Models," 1st Ed., Springer, 2004.
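For intuition, the per-point constraint can be checked numerically: if x_1 and x_2 are the normalized projections of the same 3D point in the two views and λ_1 is its depth in the first view, then cross(x_2)*(R*x_1 + T/λ_1) vanishes, since R*x_1 + T/λ_1 = X'/λ_1 is parallel to x_2. A minimal self-contained sketch in plain Java (no BoofCV types; class and variable names are illustrative, not part of the library):

```java
public class PairConstraintCheck {
    // 3x3 skew-symmetric cross-product matrix of v, so crossMat(a)*b == a x b
    static double[][] crossMat(double[] v) {
        return new double[][]{
            { 0,    -v[2],  v[1]},
            { v[2],  0,    -v[0]},
            {-v[1],  v[0],  0   }};
    }

    // 3x3 matrix times 3-vector
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2];
        return r;
    }

    public static void main(String[] args) {
        // ground-truth motion: small rotation about the z-axis plus a translation
        double c = Math.cos(0.1), s = Math.sin(0.1);
        double[][] R = {{c, -s, 0}, {s, c, 0}, {0, 0, 1}};
        double[]   T = {0.10, -0.05, 0.20};

        // a 3D point in the first camera frame; lambda is its depth (z-coordinate)
        double[] X = {0.3, -0.2, 2.5};
        double lambda = X[2];

        // normalized image coordinates in each view
        double[] x1 = {X[0]/lambda, X[1]/lambda, 1.0};
        double[] Xp = mul(R, X);                    // X' = R*X + T
        for (int i = 0; i < 3; i++) Xp[i] += T[i];
        double[] x2 = {Xp[0]/Xp[2], Xp[1]/Xp[2], 1.0};

        // residual of cross(x2)*(R*x1) + cross(x2)*T/lambda, expected ~0
        double[] v = mul(R, x1);
        for (int i = 0; i < 3; i++) v[i] += T[i]/lambda;
        double[] residual = mul(crossMat(x2), v);

        double max = 0;
        for (double r : residual) max = Math.max(max, Math.abs(r));
        System.out.println("max residual = " + max);
        System.out.println(max < 1e-12 ? "constraint holds" : "constraint violated");
    }
}
```

With noise-free synthetic data the residual is zero up to floating-point round-off; with real observations it is only minimized, which is why the class solves a least-squares null-space problem rather than an exact system.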

Constructor Summary

Method Summary
protected DMatrixRMaj getA()
    Matrix used internally.
DMatrixRMaj getProjective()
    P = [A|T]
boolean process(List<AssociatedPair> observations, List<Point3D_F64> locations)
    Computes the transformation between two camera frames using a linear equation.
boolean processHomogenous(List<AssociatedPair> observations, List<Point4D_F64> locations)
    Computes the transformation between two camera frames using a linear equation.

Constructor Details

PoseFromPairLinear6
public PoseFromPairLinear6()


Method Details

process
public boolean process(List<AssociatedPair> observations, List<Point3D_F64> locations)
Computes the transformation between two camera frames using a linear equation. Both the observed feature locations in each camera image and the depth (z-coordinate) of each feature must be known. Feature locations are in calibrated image coordinates.
Parameters:
observations - List of observations on the image plane in calibrated coordinates.
locations - List of object locations. One for each observation pair.

processHomogenous
public boolean processHomogenous(List<AssociatedPair> observations, List<Point4D_F64> locations)
Computes the transformation between two camera frames using a linear equation. Both the observed feature locations in each camera image and the depth (z-coordinate) of each feature must be known. Feature locations are in calibrated image coordinates.
Parameters:
observations - List of pixel or normalized image coordinate observations.
locations - List of object locations in homogeneous coordinates. One for each observation pair.
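The class description derives the solution from the null space of a stacked matrix: each observation contributes three rows that are linear in the twelve unknowns of P = [A|T] laid out in row-major order. The sketch below is a plain-Java illustration with no BoofCV types (all class and method names are illustrative); it builds those rows for synthetic data and checks that the true motion vector lies in the matrix's null space. The library's `process()` goes the other way, recovering the motion as the null-space vector of the observed system.

```java
public class PoseNullSpaceDemo {
    // 3x3 skew-symmetric cross-product matrix of v
    static double[][] crossMat(double[] v) {
        return new double[][]{
            { 0,    -v[2],  v[1]},
            { v[2],  0,    -v[0]},
            {-v[1],  v[0],  0   }};
    }

    // 3x3 matrix times 3-vector
    static double[] mul3(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2];
        return r;
    }

    // three rows of the stacked system for one observation:
    // cross(x2)*(A*x1 + T/lambda) = 0, unknowns p = rowMajor(A) ++ T
    static double[][] rows(double[] x1, double[] x2, double lambda) {
        double[][] C = crossMat(x2);
        double[][] M = new double[3][12];
        for (int k = 0; k < 3; k++)
            for (int j = 0; j < 3; j++) {
                for (int m = 0; m < 3; m++)
                    M[k][3*j + m] = C[k][j] * x1[m]; // coefficients of A[j][m]
                M[k][9 + j] = C[k][j] / lambda;      // coefficients of T[j]
            }
        return M;
    }

    public static void main(String[] args) {
        double c = Math.cos(0.2), s = Math.sin(0.2);
        double[][] R = {{c, 0, s}, {0, 1, 0}, {-s, 0, c}}; // rotation about y
        double[]   T = {0.05, 0.10, -0.15};

        // p = row-major vec of [R|T]; the null space should contain it
        double[] p = new double[12];
        for (int j = 0; j < 3; j++)
            for (int m = 0; m < 3; m++) p[3*j + m] = R[j][m];
        for (int j = 0; j < 3; j++) p[9 + j] = T[j];

        // six synthetic 3D points (N >= 6) and the stacked residual M*p
        double[][] pts = {{0.3,-0.2,2.0}, {-0.5,0.4,3.0}, {0.1,0.1,2.5},
                          {0.7,0.2,4.0}, {-0.3,-0.4,3.5}, {0.2,-0.6,2.2}};
        double max = 0;
        for (double[] X : pts) {
            double lambda = X[2];
            double[] x1 = {X[0]/lambda, X[1]/lambda, 1.0};
            double[] Xp = mul3(R, X);
            for (int i = 0; i < 3; i++) Xp[i] += T[i];
            double[] x2 = {Xp[0]/Xp[2], Xp[1]/Xp[2], 1.0};
            double[][] M = rows(x1, x2, lambda);
            for (int k = 0; k < 3; k++) {
                double r = 0;
                for (int col = 0; col < 12; col++) r += M[k][col] * p[col];
                max = Math.max(max, Math.abs(r));
            }
        }
        System.out.println("max |M*p| over all rows = " + max);
        System.out.println(max < 1e-12 ? "true motion lies in the null space" : "mismatch");
    }
}
```

In practice the null-space vector is found with an SVD of the stacked matrix (smallest singular vector); the redundant-row removal mentioned in the class description reduces the number of rows per observation before that solve.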

getProjective
P = [A|T]
Returns:
the projective matrix

getA
Matrix used internally.
