# All Classes and Interfaces

Class

Description

Default implementations for all functions in

`AssociateDescription2D`

Implements all the functions but does nothing.

Provides access to an RGB color by index

Provides access to a 3D point by index

Provides access to the location of point tracks.

Provides information on a point feature based SFM tracking algorithm

Given an estimate of the image noise sigma, adaptively applies a mean filter dependent on local image statistics in
order to preserve edges, see [1].

Given an undistorted normalized pixel coordinate, compute the distorted normalized coordinate.

Given an undistorted normalized pixel coordinate, compute the distorted normalized coordinate.

Given an undistorted pixel coordinate, compute the distorted normalized image coordinate.

Given an undistorted pixel coordinate, compute the distorted normalized image coordinate.

Converts the undistorted normalized coordinate into normalized pixel coordinates.

Converts the undistorted normalized coordinate into normalized pixel coordinates.

The scale and sign of a homography matrix is ambiguous.

Types of adjustments that can be done to an undistorted image.

When a binary image is created some of the sides are shifted up to a pixel.

Converts an

`Affine2D_F64`

to and from an array
parameterized format.

Displays a sequence of images.

A helpful class which allows a derivative of any order to be computed from an input image using a simple to use
interface.

Application which lists most of the demonstration applications in a GUI and allows the user to double click
to launch one in a new JVM.

Abstract way to assign pixel values to

`ImageMultiBand`

without knowing the underlying data type.

Abstract way to assign pixel values to

`ImageGray`

without knowing the underlying data type.
Common interface for associating features between two images.

Generalized interface for associating features.

Associates features from two images together using both 2D location and descriptor information.

Provides default implementations for all functions.

Provides default implementations for all functions.

Feature set aware association algorithm.

Feature set aware association algorithm for use when there is a large sparse set of unique set IDs.

Feature set aware association algorithm that takes into account image location.

Wrapper around

`AssociateDescription`

that allows it to be used inside of `AssociateDescription2D`

Indexes of two associated features and the fit score.

The observed location of a point feature in two camera views.

The observed location of a feature in two camera views.

The observed location of a conic feature in two camera views.

A track that's observed in two images at the same time.

Contains a set of three observations of the same point feature in three different views.

Indexes of three associated features and the fit score.

Visualizes associations between three views.

Interface for arbitrary number of matched 2D features

Associated set of

`Point2D_F64`

for an arbitrary number of views which can be changed.

Associated set of

`Point2D_F64`

for an arbitrary number of views that is fixed.
Performs association by greedily assigning matches to the src list from the dst list if they minimize a score
function.
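
The greedy strategy described above can be illustrated with a short plain-Java sketch (hedged: this is not BoofCV's implementation, and the class and method names here are hypothetical; descriptors are simply `double[]` arrays compared with squared Euclidean distance):

```java
// Minimal sketch of greedy score-based association: each src descriptor is
// matched to the dst descriptor with the lowest error score.
public class GreedyAssociateSketch {
    /** Returns match[i] = index in dst best matching src[i], or -1 if dst is empty. */
    public static int[] associate(double[][] src, double[][] dst) {
        int[] matches = new int[src.length];
        for (int i = 0; i < src.length; i++) {
            int best = -1;
            double bestScore = Double.MAX_VALUE;
            for (int j = 0; j < dst.length; j++) {
                double score = distanceSq(src[i], dst[j]);
                if (score < bestScore) { bestScore = score; best = j; }
            }
            matches[i] = best;
        }
        return matches;
    }

    /** Squared Euclidean distance between two descriptors of equal length. */
    static double distanceSq(double[] a, double[] b) {
        double sum = 0;
        for (int k = 0; k < a.length; k++) {
            double d = a[k] - b[k];
            sum += d * d;
        }
        return sum;
    }
}
```

The real algorithms add pruning steps (e.g. maximum error, backwards validation) on top of this basic loop.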

Base class for associating image features using descriptions and 2D distance cropping.

Greedily assigns two features to each other based on their scores while pruning features based on their
distance apart.

Brute force greedy association for objects described by a

`TupleDesc_F64`

.
Brute force greedy association for objects described by a

`TupleDesc_F64`

.

Greedy association for matching descriptors

Computes the Euclidean distance squared between two points for association

Computes the distance between two points.

Two features can only be associated if their distance in image space is less than the specified number.

Matches features using a

`NearestNeighbor`

search from DDogleg.

Parallel associate version of

`AssociateNearestNeighbor_ST`

.

Matches features using a

`NearestNeighbor`

search from DDogleg.

Association for a stereo pair where the source is the left camera and the destination is the right camera.

Associates features in three view with each other by associating each pair of images individually.

Common interface for associating features between three images.

If multiple associations are found for a single source and/or destination feature then this ambiguity is
removed by selecting the association with the best score.

Shows which two features are associated with each other.

Displays relative association scores for different features.

Image information for auto generated code

Operations related to down sampling image by computing the average within square regions.

Information on a detected Aztec Code

At what stage did it fail?

Specifies which encoding is currently active in the data stream.

Which symbol structure is used

High level interface for reading Aztec Code fiducial markers from gray scale images

An Aztec Code detector which is designed to detect finder pattern corners to a high degree of precision.

Converts a raw binary stream read from the image and converts it into a String message.

Given location of a candidate finder patterns and the source image, decode the marker.

Encodes the data message into binary data.

Contains the data for a sequence of characters that are all encoded in the same mode

Automatic encoding algorithm as described in [1] which seeks to encode text in a format which minimizes the amount
of storage required.

Searches for Aztec finder patterns inside an image and returns a list of candidates.

Candidate locator patterns.

Generates an image of an Aztec Code marker as specified in ISO/IEC 24778:2008(E)

Contains functions for computing error correction code words and applying error correction to a message

Encodes and decodes binary data for mode message

Describes the locator pattern for an

`AztecCode`

Description of a layer in the pyramid

Performs background subtraction on an image using the very simple per-pixel "basic" model, as described in [1].
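
The idea behind this "basic" per-pixel model can be sketched in a few lines (a hedged illustration only; the class, the `learnRate` and `threshold` names, and the API are hypothetical, not BoofCV's): the background is a running average per pixel, and a pixel is labeled foreground when it differs from that average by more than a threshold.

```java
// Per-pixel "basic" background model sketch: running average + threshold test.
public class BasicBackgroundSketch {
    final float[] background;       // running-average background, one entry per pixel
    final float learnRate, threshold;

    public BasicBackgroundSketch(int numPixels, float learnRate, float threshold) {
        this.background = new float[numPixels];
        this.learnRate = learnRate;
        this.threshold = threshold;
        java.util.Arrays.fill(background, Float.NaN); // NaN marks "not yet initialized"
    }

    /** Updates the model with one observed pixel and returns true if it is foreground. */
    public boolean update(int index, float value) {
        float bg = background[index];
        if (Float.isNaN(bg)) { background[index] = value; return false; }
        boolean foreground = Math.abs(value - bg) > threshold;
        // exponential moving average update of the background estimate
        background[index] = bg * (1 - learnRate) + value * learnRate;
        return foreground;
    }
}
```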

Background model in which each pixel is modeled as an independent Gaussian distribution.

Background model in which each pixel is modeled as a Gaussian mixture model.

Common code for all implementations of

`BackgroundAlgorithmGmm`

.

Base class for background subtraction/motion detection.

Base class for classifying pixels and background based on the apparent motion of pixels when the camera is moving.

Base class for classifying pixels as background based on the apparent motion of pixels when the camera is static.

Implementation of

`BackgroundAlgorithmBasic`

for moving images.

Implementation of

`BackgroundMovingBasic`

for `Planar`

.

BackgroundMovingBasic_IL_MT<T extends ImageInterleaved<T>,Motion extends InvertibleTransform<Motion>>

Implementation of

`BackgroundMovingBasic`

for `Planar`

.

Implementation of

`BackgroundMovingBasic`

for `Planar`

.

Implementation of

`BackgroundMovingBasic`

for `Planar`

.

Implementation of

`BackgroundMovingBasic`

for `ImageGray`

.

Implementation of

`BackgroundMovingBasic`

for `ImageGray`

.

Implementation of

`BackgroundAlgorithmGaussian`

for moving images.

BackgroundMovingGaussian_IL<T extends ImageInterleaved<T>,Motion extends InvertibleTransform<Motion>>

Implementation of

`BackgroundMovingGaussian`

for `ImageInterleaved`

.

BackgroundMovingGaussian_IL_MT<T extends ImageInterleaved<T>,Motion extends InvertibleTransform<Motion>>

Implementation of

`BackgroundMovingGaussian`

for `ImageInterleaved`

.

Implementation of

`BackgroundMovingGaussian`

for `Planar`

.

Implementation of

`BackgroundMovingGaussian`

for `Planar`

.

Implementation of

`BackgroundMovingGaussian`

for `ImageGray`

.

Implementation of

`BackgroundMovingGaussian`

for `ImageGray`

.

Implementation of

`BackgroundAlgorithmGmm`

for moving images.

Implementation of

`BackgroundMovingGmm`

for `ImageGray`

.

Implementation of

`BackgroundMovingGmm`

for `ImageGray`

.

Implementation of

`BackgroundMovingGmm`

for `ImageGray`

.

Implementation of

`BackgroundMovingGmm`

for `ImageGray`

.

Implementation of

`BackgroundAlgorithmBasic`

for stationary images.

Implementation of

`BackgroundStationaryBasic`

for `ImageGray`

.

Implementation of

`BackgroundStationaryBasic`

for `ImageGray`

.

Implementation of

`BackgroundStationaryBasic`

for `ImageGray`

.

Implementation of

`BackgroundStationaryBasic`

for `ImageGray`

.

Implementation of

`BackgroundStationaryBasic`

for `Planar`

.

Implementation of

`BackgroundStationaryBasic`

for `Planar`

.

Implementation of

`BackgroundAlgorithmGaussian`

for stationary images.

Implementation of

`BackgroundStationaryGaussian`

for `ImageInterleaved`

.

Implementation of

`BackgroundStationaryGaussian`

for `ImageInterleaved`

.

Implementation of

`BackgroundStationaryGaussian`

for `Planar`

.

Implementation of

`BackgroundStationaryGaussian`

for `Planar`

.

Implementation of

`BackgroundMovingGaussian`

for `ImageGray`

.

Implementation of

`BackgroundMovingGaussian`

for `ImageGray`

.

Implementation of

`BackgroundAlgorithmGmm`

for stationary images.

Implementation of

`BackgroundAlgorithmGmm`

for `ImageMultiBand`

.

Implementation of

`BackgroundAlgorithmGmm`

for `ImageMultiBand`

.

Implementation of

`BackgroundAlgorithmGmm`

for `ImageGray`

.

Implementation of

`BackgroundAlgorithmGmm`

for `ImageGray`

.

Base class for set aware feature association

Base class for set aware feature association

Common configuration for all

`BackgroundModel`

Common class for all polyline algorithms.

Base class for dense HOG implementations.

Base class for square fiducial detectors.

Provides some basic functionality for implementing

`GeneralFeatureIntensity`

.

Base class for ImageClassifiers which implements common elements

Control panel which shows you the image's size, how long it took to process,
and current zoom factor.

Base class for computing line integrals along lines/edges.

Simple interface for a GUI to tell the main processing that it needs to render the display
or reprocess that data.

A kernel can be used to approximate bicubic interpolation.

Performs bilinear interpolation to extract values between pixels in an image.

Performs bilinear interpolation to extract values between pixels in an image.

Performs bilinear interpolation to extract values between pixels in an image.

Performs bilinear interpolation to extract values between pixels in an image.

Performs bilinear interpolation to extract values between pixels in an image.
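
As background for these interpolation classes, the bilinear formula itself is a weighted average of the 2x2 pixel neighborhood around the sample point. A hedged, self-contained sketch (illustrative only, not BoofCV's `InterpolatePixel` API; the image is a row-major `float[]`):

```java
// Bilinear interpolation at a sub-pixel location (x, y) in a gray image.
public class BilinearSketch {
    public static float sample(float[] img, int width, float x, float y) {
        int x0 = (int) x, y0 = (int) y;     // top-left integer pixel of the 2x2 block
        float ax = x - x0, ay = y - y0;     // fractional offsets in [0,1)
        float p00 = img[y0 * width + x0];
        float p10 = img[y0 * width + x0 + 1];
        float p01 = img[(y0 + 1) * width + x0];
        float p11 = img[(y0 + 1) * width + x0 + 1];
        // weights are the areas of the opposing sub-rectangles
        return p00 * (1 - ax) * (1 - ay) + p10 * ax * (1 - ay)
             + p01 * (1 - ax) * ay       + p11 * ax * ay;
    }
}
```

Real implementations additionally handle the image border, which is why border-handling classes appear elsewhere in this index.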

Describes the layout of a BRIEF descriptor.

Interface for finding contours around binary blobs.

Wrapper around

`LinearExternalContours`

Helper function that makes it easier to adjust the size of the binary image when working with a padded or unpadded
contour finding algorithm.

Common interface for binary contour finders

Many contour algorithms require that the binary image has an outside border of all zeros.

Detects ellipses inside gray scale images.

Detects ellipses inside a binary image by finding their contour and fitting an ellipse to the contour.

Class for binary filters

Contains a standard set of operations performed on binary images.

Interface for finding contours around binary blobs and labeling the image
at the same time.

Wrapper around

`LinearContourLabelChang2004`

for
`BinaryLabelContourFinder`

Applies binary thinning operators to the input image.

The start and length of segment inside a block of arrays

List of block matching approaches available for use in SGM.

Interface for computing disparity scores across an entire row

Computes the block disparity score using a

`CensusTransform`

.

Block

`StereoMutualInformation`

implementation.

Score using NCC.

Computes the Sum of Absolute Difference (SAD) for block matching based algorithms.
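
The SAD error itself is the simplest of these block scores; a minimal sketch (hedged: illustrative only, with a hypothetical class name, not the BoofCV API) over two equally sized blocks stored as flat arrays:

```java
// Sum of Absolute Differences (SAD) between two equally sized blocks.
public class SadSketch {
    public static int sad(int[] a, int[] b) {
        int sum = 0;
        for (int i = 0; i < a.length; i++)
            sum += Math.abs(a[i] - b[i]);   // per-pixel absolute error
        return sum;                          // lower score = better match
    }
}
```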

Interface for filters which blur the image.

Catch-all class for functions which "blur" an image, typically used to "reduce" the amount
of noise in the image.

Simplified interface for using a blur filter that requires storage.

Error thrown when BoofCV asserts fail

Central class for controlling concurrency in BoofCV.

Grab bag of different default values used throughout BoofCV.

Set of commonly used functions for Lambdas

Dynamically rendered BoofCV Logo

Miscellaneous functions which have no better place to go.

Loads a MJPEG wrapped inside a

`SimpleImageSequence`

.

Functions to aid in unit testing code for correctly handling sub-images

Common arguments for verbose debug

Automatically generated file containing build version information.

Allows a

`VideoInterface`

to be created abstractly without directly referencing
the codec class.

Remaps references to elements outside of an array to elements inside of the array.

Returns the closest point inside the image based on Manhattan distance.
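
This border rule amounts to clamping out-of-bounds coordinates to the nearest valid pixel, which also minimizes Manhattan distance. A hedged sketch (hypothetical names, not BoofCV's `ImageBorder` API):

```java
// "Closest point inside the image" border handling via coordinate clamping.
public class ClampBorderSketch {
    public static int clamp(int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /** Returns the {x, y} of the valid pixel nearest to (x, y). */
    public static int[] closestInside(int x, int y, int width, int height) {
        return new int[]{ clamp(x, 0, width - 1), clamp(y, 0, height - 1) };
    }
}
```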

Access to outside the array are reflected back into the array around the closest border.

Handles borders by wrapping around to the image's other side.

How the image border is handled by a convolution filter.

Override for blur image ops functions

Base class for override operations.

Override for

`ConvolveImage`

.

Override for

`ConvolveImageMean`

Override for normalized convolutions

Override for

`FactoryBinaryContourFinder`

.

Location of override functions related to

`FactoryFeatureExtractor`

.

Override functions which allow external code to be called instead of BoofCV for thresholding operations.

Provides functions for managing overridden functions

Distance types for BOW methods

Common image match data type for Bag-of-Words methods

Utility functions related to Bag-of-Words methods

`DogArray`

for `TupleDesc_B`

.
Dense optical flow which adheres to a brightness constancy assumption, a gradient constancy
assumption, and a discontinuity-preserving spatio-temporal smoothness constraint.

Implementation of

`BroxWarpingSpacial`

for `HornSchunck`

.

Loads or plays a sequence of buffered images.

High level interface for bundle adjustment.

Generalized camera model for bundle adjustment.

Computes observations errors/residuals for metric bundle adjustment as implemented using

`UnconstrainedLeastSquares`

.

Computes the Jacobian for bundle adjustment with a Schur implementation.

Computes the Jacobian for

`BundleAdjustmentSchur_DDRM`

using sparse matrices
in EJML.

Computes the Jacobian for

`BundleAdjustmentSchur_DSCC`

using sparse matrices
in EJML.

Operations related to Bundle Adjustment.

Converts any bundle adjustment camera into a

`CameraPinholeBrown`

.
Computes observations errors/residuals for projective bundle adjustment as implemented using

`UnconstrainedLeastSquares`

.

Computes the Jacobian for

`BundleAdjustmentSchur`

for generic matrices.

Computes the Jacobian for

`BundleAdjustmentSchur_DSCC`

using sparse matrices
in EJML.

Computes the Jacobian for

`BundleAdjustmentSchur_DSCC`

using sparse matrices
in EJML.

Implementation of bundle adjustment using the Schur complement and generic sparse matrices.

Implementation of

`BundleAdjustmentSchur`

for dense matrices.

Implementation of

`BundleAdjustmentSchur`

for sparse matrices.

Computes a numerical Jacobian from

`BundleAdjustmentCamera`

.

Projective camera model.

Interface for an object which describes the camera's state.

Model that does nothing other than throw exceptions.

Implementation of

`CameraKannalaBrandt`

for bundle adjustment.

Formulas for

`CameraPinhole`

.

Formulas for

`CameraPinholeBrown`

.

A pinhole camera with radial distortion that is fully described using three parameters.

Bundler and Bundle Adjustment in the Large use a different coordinate system.

Given parameters from bundle adjustment, compute all the parameters needed to compute a rectified stereo image
pair.

Implementation of

`CameraUniversalOmni`

for bundle adjustment.

A simplified camera model that assumes the camera's zoom is known as part of the camera state

Camera state for storing the zoom value

Precomputes the output of sine/cosine operations.

Performs the full processing loop for calibrating a mono camera from a planar grid.

Multi camera calibration using multiple planar targets.

Prior information provided about the camera by the user

Calibration quality statistics

List of all observation from a camera in a frame.

Workspace for information related to a single frame.

Specifies which target was observed and what the inferred transform was.

Given a sequence of observations from a stereo camera compute the intrinsic calibration
of each camera and the extrinsic calibration between the two cameras.

Wrapper around

`DetectChessboardBinaryPattern`

for `DetectSingleFiducialCalibration`

Detector for chessboard calibration targets which searches for X-Corners.

Calibration implementation of circle hexagonal grid fiducial.

Calibration implementation of circle regular grid fiducial.

Implementation of

`DetectMultiFiducialCalibration`

for `ECoCheckDetector`

.

Implementation of

`DetectSingleFiducialCalibration`

for square grid target types.

Wrapper which allows a calibration target to be used like a fiducial for pose estimation.

Functions for loading and saving camera calibration related data structures from/to disk

Provides a graphical way to select the camera calibration model

List of observed features and their pixel locations on a single calibration target from one image.

Set of observed calibration targets in a single frame from a single camera

List of all the supported types of calibration fiducial patterns

Full implementation of the Zhang99 camera calibration algorithm using planar calibration targets.

Provides information on how good the calibration images are and whether the calibration results can be trusted.

Used to specify the calibration target's parameters

Division model for lens distortion [1].

A camera model for pinhole, wide angle, and fisheye cameras.

Common class for camera models

List of all built in camera models

Intrinsic camera parameters for a pinhole camera.

Adds radial and tangential distortion to the intrinsic parameters of a

`pinhole camera`

.

Computes the location of a point on the plane from an observation in pixels and the reverse.

Given a transform from a pixel to normalized image coordinates or spherical it will define an equirectangular
transform.

Camera model for omnidirectional single viewpoint sensors [1].

Implementation of canny edge detector.

Canny edge detector where the thresholds are computed dynamically based upon the magnitude of the largest edge

Value of a decoded cell inside of

`ECoCheckDetector`

.The Census Transform [1] computes a bit mask for each pixel in the image.
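
The per-pixel bit mask encodes the ordering between a pixel and its neighbors. A hedged 3x3 sketch (illustrative; the class and the exact bit order are hypothetical, not BoofCV's `CensusTransform` class):

```java
// 3x3 census transform at one pixel: each of the 8 neighbors contributes one
// bit, set when that neighbor is less than the center pixel.
public class CensusSketch {
    public static int census3x3(int[] img, int width, int x, int y) {
        int center = img[y * width + x];
        int bits = 0, bit = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue; // skip the center itself
                if (img[(y + dy) * width + (x + dx)] < center)
                    bits |= 1 << bit;
                bit++;
            }
        }
        return bits; // 8-bit descriptor of the local intensity ordering
    }
}
```

Because the descriptor depends only on relative ordering, comparing census bit masks (e.g. by Hamming distance) is robust to monotonic lighting changes, which is why it is popular as a stereo matching error.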

Different sampling patterns for

`CensusTransform`

.Corner in a chessboard.

From a set of

`ChessboardCorners`

find all the chessboard grids in view.

Collection of edges that share the same relationship with the source vertex.

Describes the relationship between two vertexes in the graph.

Graph vertex for a corner.

Given a chessboard corner cluster find the grid which it matches.

Corner distance for use in

`NearestNeighbor`

searches.

Computes edge intensity for the line between two corners.

A graph describing the inner corners in a chessboard patterns.

Helper which expands polygons prior to optimization.

Wrapper around

`CirculantTracker`

for `TrackerObjectQuad`

.
Tracker that uses the theory of Circulant matrices, Discrete Fourier Transform (DCF), and linear classifiers to track
a target and learn its changes in appearance [1].

Function for use when referencing the index in a circular list

Used create a histogram of actual to predicted classification.

Contains a classifier and where to download its models.

Scene classification which uses a bag-of-words model and K-Nearest Neighbors.

Transforms an image in an attempt to not change the information contained inside of it for processing by
a classification algorithm that requires an image of fixed size.

Given a labeled image in which pixels that contains the same label may or may not be connected to each other,
create a new labeled image in which only connected pixels have the same label.

Finds clusters of

`TupleDesc_F64`

which can be used to identify frequent features, a.k.a. words.

Reading and writing data in the Bundle Adjustment in the Large format.

Encodes and decodes the values in a

`SceneStructureMetric`

using the following
parameterization:

Encodes and decodes the values in a

`SceneStructureProjective`

using the following
parameterization:

Stores the values of a 3-band color using floating point numbers.

Stores the values of a 3-band color using integers.

Methods for computing the difference (or error) between two colors in CIELAB color space

Specifies different color formats

Color conversion between RGB and HSV color spaces.
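The RGB-to-HSV direction follows the standard hexcone formulas; a hedged sketch (conventions differ between libraries — hue is in degrees here for readability, and the class and method names are illustrative, not BoofCV's `ColorHsv` API):

```java
// RGB to HSV using the standard hexcone formulas.
public class RgbToHsvSketch {
    /** r,g,b in [0,1]; returns {hueDegrees, saturation, value}. */
    public static float[] rgbToHsv(float r, float g, float b) {
        float max = Math.max(r, Math.max(g, b));
        float min = Math.min(r, Math.min(g, b));
        float delta = max - min;
        float h;
        if (delta == 0)    h = 0;                              // gray: hue undefined
        else if (max == r) h = 60 * (((g - b) / delta) % 6);   // between yellow & magenta
        else if (max == g) h = 60 * ((b - r) / delta + 2);     // between cyan & yellow
        else               h = 60 * ((r - g) / delta + 4);     // between magenta & cyan
        if (h < 0) h += 360;
        float s = max == 0 ? 0 : delta / max;
        return new float[]{ h, s, max };
    }
}
```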

Given a set of 3D points and the image they are from, extract the RGB value.

Helper class which handles all the data structure manipulations for extracting RGB color values from a point
cloud computed by

`MultiViewStereoFromKnownSceneStructure`

.

Conversion between RGB and CIE LAB color space.

3D point with a color associated with it

Stores an array of floats of constant size.

Contains functions related to working with RGB images and converting RGB images to gray-scale using a weighted
equation.

Color conversion between RGB and CIE XYZ models.

Color conversion between YUV and RGB, and YCbCr and RGB.

Wrapper around

`TrackerMeanShiftComaniciu2003`

for `TrackerObjectQuad`

Combines a sequence of files together using a simple format.

Compares two scores to see which is better

Compares two scores to see which is better

Panel for displaying two images next to each other separated by a border.

Algorithms for finding a 4x4 homography which can convert two camera matrices of the same view that differ only by
a projective ambiguity.

SIFT detector and descriptor combined to simultaneously detect and describe the key points it finds.

Wrapper around

`CompleteSift`

for `DetectDescribePoint`

.

Concurrent implementation of

`CompleteSift`

.

Update cluster assignments for

`TupleDesc_F32`

descriptors.

Update cluster assignments for

`TupleDesc_F64`

descriptors.

Concurrent implementation of

`ComputeMeanTuple_F32`

Concurrent implementation of

`ComputeMeanTuple_F64`

Concurrent implementation of

`ComputeMeanTuple_F64`

Update cluster assignments for

`TupleDesc_U8`

descriptors.

Update cluster assignments for

`TupleDesc_B`

descriptors.

Concurrent implementation of

`ComputeMedianTuple_B`

Computes the acute angle between two observations.

Computes different variants of Otsu.
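
The common starting point for all Otsu variants is maximizing between-class variance over a gray-level histogram. A hedged sketch of that baseline (illustrative only; the class and method names are hypothetical, not the API of the class above):

```java
// Baseline Otsu: pick the threshold maximizing between-class variance.
public class OtsuSketch {
    /** histogram[i] = count of pixels with value i; returns the selected threshold. */
    public static int otsu(int[] histogram) {
        long total = 0, sumAll = 0;
        for (int i = 0; i < histogram.length; i++) {
            total += histogram[i];
            sumAll += (long) i * histogram[i];
        }
        long sumB = 0, wB = 0;
        double bestVar = -1;
        int bestT = 0;
        for (int t = 0; t < histogram.length; t++) {
            wB += histogram[t];            // background weight (values <= t)
            if (wB == 0) continue;
            long wF = total - wB;          // foreground weight
            if (wF == 0) break;
            sumB += (long) t * histogram[t];
            double mB = (double) sumB / wB;              // background mean
            double mF = (double) (sumAll - sumB) / wF;   // foreground mean
            double between = (double) wB * wF * (mB - mF) * (mB - mF);
            if (between > bestVar) { bestVar = between; bestT = t; }
        }
        return bestT;
    }
}
```

The variants differ in details such as weighting, local vs. global histograms, and tie-breaking.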

Computes the mean color for regions in a segmented image.

Implementation for

`GrayF32`

Implementation for

`Planar`

Implementation for

`Planar`

Implementation for

`GrayU8`

Configuration for associating using descriptors only

Configuration for

`AssociateGreedyDesc`

.

Configuration for

`AssociateNearestNeighbor_ST`

.

Configuration for

`ImplOrientationAverageGradientIntegral`

.

Configuration for

`AztecCodePreciseDetector`

Configuration for

`ConfigBackgroundBasic`

.

Configuration for

`ConfigBackgroundGaussian`

.

Configuration for

`ConfigBackgroundGmm`

.

Configuration for BRIEF descriptor.

Configuration for

`HornSchunckPyramid`

Configuration for

`BundleAdjustment`

Configuration for

`MetricBundleAdjustmentUtils`

Describes the calibration target.

Calibration parameters for chessboard style calibration grid.

Calibration parameters for chessboard style calibration grid.

Calibration parameters for an hexagonal grid of circle calibration target.

Calibration parameters for a regular grid of circle calibration target.

Configuration for

`CirculantTracker`

.

Configuration for

`Comaniciu2003_to_TrackerObjectQuad`

.

Configuration for

`CompleteSift`

.

Generic configuration for optimization routines

Configuration that specifies how a

`TupleDesc`

should be converted into one of
a different data structure.

Array data type for output tuple

Configuration for

`ImageDeformPointMLS_F32`

Configuration for

`FactoryDescribeImageDense`

Configuration for dense SIFT features

Configuration for Dense SURF features optimized for Speed

Configuration for Dense SURF features optimized for stability

Configuration for creating

`DescribePoint`

Configuration for creating

`DescribePointRadiusAngle`

Configuration for creating

`DetectDescribePoint`

implementations.

Configuration for detecting any built-in interest point.

Specifies number of layers in the pyramid.

Generic configuration for any dense stereo disparity algorithm.

List of available approaches

Configuration for the basic block matching stereo algorithm that employs a greedy winner takes all strategy.

A block matching algorithm which improved performance along edges by finding the score for 9 regions but only
selecting the 5 best.

Configurations for different types of disparity error metrics

Configuration for Census

Configuration for Hierarchical Mutual Information.

Normalized cross correlation error

Configuration for

`Semi Global Matching`

Allowed number of paths

Configuration for detecting ECoCheck markers.

Specifies the grid shape and physical sizes for one or more

`ConfigECoCheckDetector`

type markers.

Configuration for computing a binary image from a thresholded gradient.

Configuration for

`BinaryEllipseDetector`

for use in `FactoryShapeDetector`

Parameters for

`EdgeIntensityEllipse`

Configuration for implementations of

`EpipolarScore3D`

.

Configuration for

`ScoreFundamentalHomographyCompatibility`

Configuration for

`ScoreFundamentalVsRotation`

Configuration for

`ScoreRatioFundamentalHomography`

Specifies which algorithm to use

Configuration parameters for estimating an essential matrix robustly.

General configuration for

`NonMaxSuppression`

.

Configuration for FAST feature detector.

Configuration for

`FastHessianFeatureDetector`

plus feature extractor.

Generic configuration for using implementations of

`FeatureSceneRecognition`

inside
of `SceneRecognition`

.

Which type of recognition algorithm to use

Configuration for

`SegmentFelzenszwalbHuttenlocher04`

.

Configuration for

`SquareBinary_to_FiducialDetector`

.

Configuration that describes how to detect a Hamming marker.

Configuration for

`SquareImage_to_FiducialDetector`

.

Configuration parameters for estimating a fundamental matrix robustly.

Configuration for

`GeneralFeatureDetector`

.

Configuration for

`GeneratePairwiseImageGraph`

.

Configuration for

`GenerateStereoPairGraphFromScene`

.

Generates configuration.

Implementation of

`ConfigGenerator`

that samples the configuration space using a grid pattern.

Base class for searches which follow a repeatable pattern

Implementation of

`ConfigGenerator`

that randomly samples each parameter using a uniform distribution.

Implementation of

`ConfigGenerator`

that samples the configuration space along each degree of
freedom (a parameter) independently.

Generic class that specifies the physical dimensions of a grid.

Configuration for uniformly sampling points inside an image using a grid.

Defines the calibration pattern based on

`hamming checkerboard fiducials`

where square
markers are embedded inside a chessboard/checkerboard pattern.

Defines the calibration pattern based on

`hamming square fiducials`

where each square
is a marker that can be uniquely identified.

Defines the dictionary and how they are encoded in a Hamming distance marker.

Defines a marker

Configuration for

`Harris`

corner.

Configuration for

`HierarchicalVocabularyTree`

.

Configuration parameters for estimating a homography

Configuration for

`HornSchunck`

Configuration for

`HornSchunckPyramid`

Configuration for

`HoughTransformBinary`

Approach used to compute a binary image

Configuration for

`DetectLineHoughFootSubimage`

.

Configuration for

`HoughTransformGradient`

Configuration for implementations of

`VisOdomKeyFrameManager`

Configuration for

`KltTracker`

Specifies a length as a fixed length or relative to the total size of some other object.

Configuration for

`DetectLineSegmentsGridRansac`

.Configuration for Locally Likely Arrangement Hashing (LLAH).

Standard configuration parameters for

`LeastMedianOfSquares`

Configuration for performing a mean-shift search for a local peak

Configuration for

`MicroQrCodePreciseDetector`

Configuration for

`MultiViewStereoFromKnownSceneStructure`

.Configuration for

`DenseOpticalFlowBlockPyramid`

Base configuration for orientation estimation.

Orientation estimation which takes in the image gradient

Orientation estimation which takes in an integral image

Configuration for region orientations

Parameters for

`HoughParametersFootOfNorm`

Parameters for

`HoughParametersPolar`

Projective to metric self calibration algorithm configuration which lets you select multiple approaches.

Configuration class for

`PyramidKltTracker`

.Configuration for visual odometry by assuming a flat plane using PnP style approach.

Configuration parameters for solving the PnP problem

Configuration for all single point features, e.g.

Configuration for creating implementations of

`PointTracker`

Configuration for

`DetectPolygonFromContour`

for use in `FactoryShapeDetector`

.

Configuration for

`DetectPolygonFromContour`

Configuration for

`PolylineSplitMerge`

Configuration for

`ProjectiveReconstructionFromPairwiseGraph`

Configuration for

`QrCodePreciseDetector`

Standard configuration for

`RANSAC`

.

Configuration for

`FeatureSceneRecognitionNearestNeighbor`

.

Configuration for recognition algorithms based on

`RecognitionVocabularyTreeNister2006`

Configuration parameters for

`RefinePolygonToGrayLine`

Configuration for visual odometry from RGB-D image using PnP style approach.

Configuration for

`SegmentMeanShift`

Configuration for

`SelectFramesForReconstruction3D`

Configuration for

`FeatureSelectLimitIntensity`

Configuration for

`SelfCalibrationLinearDualQuadratic`

.

Configuration for

`SelfCalibrationEssentialGuessAndCheck`

.

Configuration for

`SelfCalibrationPraticalGuessAndCheckFocus`

Contains configuration parameters for

`SparseFlowObjectTracker`

.

Configuration for

`Shi-Tomasi`

corner.

Configuration for

`DescribePointSift`

Configuration for

`SiftDetector`

Configuration for

`OrientationHistogramSift`

Configuration for

`SiftScaleSpace`

Configuration for

`SimilarImagesSceneRecognition`

Configuration for

`SimilarImagesSceneRecognition`

Configuration used when creating

`SegmentSlic`

via
`FactoryImageSegmentation`

.

Configuration for

`ImplOrientationSlidingWindowIntegral`

.

Configuration for

`SparseSceneToDenseCloud`

Configuration for

`DisparitySmootherSpeckleFilter`

.

Deprecated.

Calibration parameters for square-grid style calibration grid.

Configuration for

`VisOdomDualTrackPnP`

Configuration for

`WrapVisOdomMonoStereoDepthPnP`

.

Configuration for

`WrapVisOdomDualTrackPnP`

.

Abstract base class for SURF implementations.

Configuration for SURF implementation that has been designed for speed at the cost of some
stability.

Configuration for SURF implementation that has been designed for stability.

Template based image descriptor.

Configuration for

Configuration for all threshold types.

Configuration for

`ThresholdBlockMinMax`

Configuration for all threshold types.

Configuration file for TLD tracker.

Configuration for

`DetectDescribeAssociateTracker`

Configuration for

`PointTrackerHybrid`

.

Configuration for

`TldTracker`

as wrapped inside of `Tld_to_TrackerObjectQuad`

.

Configuration for triangulation methods.

Configuration for estimating

`TrifocalTensor`

Configuration for trifocal error computation

Configuration for Uchiya Marker approach to detect random dot markers

Complex algorithms with several parameters can specify their parameters using a separate class.

Implementers of this interface can be configured using data from an

`InputStream`

Base class for visual odometry algorithms based on

`PointTracker`

.Configuration for

`WatershedVincentSoille1991`

Storage for a confusion matrix.

Visualizes a confusion matrix.

Contains information on what was at the point

Naive implementation of connected-component based speckle filler.

Naive implementation of connected-component based speckle filler.

Connected component based speckle filler

Searches for small clusters (or blobs) of connected pixels and then fills them in with the specified fill color.
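The core idea of a connected-component speckle filler can be sketched as follows (a hypothetical minimal example, not BoofCV's `ConnectedGridSeedSpeckleFiller` implementation; the class and method names here are made up for illustration): a breadth-first search collects each 4-connected region of equal-valued pixels, and regions below a size threshold are overwritten with the fill value.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of connected-component speckle filling.
public class SpeckleFillSketch {
    // Fill 4-connected regions smaller than maxSize with fillValue.
    static void fill(int[][] img, int maxSize, int fillValue) {
        int rows = img.length, cols = img[0].length;
        boolean[][] seen = new boolean[rows][cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                if (seen[y][x]) continue;
                // BFS over the 4-connected region with the same pixel value
                int value = img[y][x];
                ArrayDeque<int[]> queue = new ArrayDeque<>();
                List<int[]> region = new ArrayList<>();
                queue.add(new int[]{x, y});
                seen[y][x] = true;
                while (!queue.isEmpty()) {
                    int[] p = queue.poll();
                    region.add(p);
                    int[][] neigh = {{p[0]+1,p[1]},{p[0]-1,p[1]},{p[0],p[1]+1},{p[0],p[1]-1}};
                    for (int[] n : neigh) {
                        if (n[0] < 0 || n[1] < 0 || n[0] >= cols || n[1] >= rows) continue;
                        if (seen[n[1]][n[0]] || img[n[1]][n[0]] != value) continue;
                        seen[n[1]][n[0]] = true;
                        queue.add(n);
                    }
                }
                // small clusters are considered speckle noise and filled in
                if (region.size() < maxSize) {
                    for (int[] p : region) img[p[1]][p[0]] = fillValue;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[][] img = {
            {0, 0, 0, 0},
            {0, 7, 0, 0},
            {0, 0, 0, 0},
        };
        fill(img, 2, 0); // the lone 7 is a cluster of size 1 < 2, so it is filled
        System.out.println(img[1][1]); // 0
    }
}
```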

Implementation of

`ConnectedTwoRowSpeckleFiller`

for `GrayU8`

.Implementation of

`ConnectedTwoRowSpeckleFiller`

for `GrayU8`

.
Given a grid of detected line segments, connect line segments together if they appear to be
a part of the same line.

List of connectivity rules.

Internal and external contours for a binary blob.

Computes the average value of points sampled outside and inside the contour at regular intervals.

Operations related to contours

Internal and external contours for a binary blob with the actual points stored in a

`PackedSetsPoint2D_I32`

.Used to trace the external and internal contours around objects for

`LinearContourLabelChang2004`

.Base implementation for different tracer connectivity rules.

Control Panel for

`ConfigAssociateGreedy`

.Control panel for

`ConfigAssociateNearestNeighbor`

.Provides full control over all of Detect-Describe-Associate using combo boxes that when selected will change
the full set of controls shown below.

Control panel for creating Detect-Describe-Associate style trackers

Control panel for

`ConfigBrief`

Control panel for

`ConfigSiftDescribe`

Control panel for

`ConfigTemplateDescribe`

Contains controls for all the usual detectors, descriptors, and associations.

Controls for configuring disparity algorithms

Controls GUI and settings for disparity calculation

What's being shown to the user

Controls for configuring sparse disparity algorithms

GUI control panel for

`ConfigExtract`

Control panel for

`ConfigFastCorner`

Controls for configuring

`ConfigFastHessian`

.Control panel for

`ConfigGeneralDetector`

.Control panel for creating Detect-Describe-Associate style trackers

Panel for configuring Brown camera model parameters

Control panel for adjusting how point clouds are visualized

Control for detecting corners and dots/blobs.

Configuration for Point KLT Tracker

Control panel for selecting any

`PointTracker`

Control panel for SIFT.

Control panel for

`ConfigSiftScaleSpace`

Control panel for

`ConfigStereoDualTrackPnP`

.Control panel for

`ConfigStereoMonoTrackPnP`

Control Panel for

`ConfigStereoQuadPnP`

Control panel for

`ConfigSurfDescribe`

Controls for

`ConfigVisOdomTrackPnP`

Functions for converting to and from

`BufferedImage`

.Converts images that are stored in

`ByteBuffer`

into BoofCV image types and performs
a local copy when the raw array can't be accessed.

Converts between different types of descriptions

Functions for converting between different image types.

Use the filter interface to convert the image type using

`GConvertImage`

.Functions for converting image formats that don't cleanly fit into any other location

Low level implementations of different methods for converting

`ImageInterleaved`

into
`ImageGray`

.Low level implementations of different methods for converting

`ImageInterleaved`

into
`ImageGray`

.Functions for converting between JavaCV's IplImage data type and BoofCV image types

Functions for converting between different labeled image formats.

Converts

`FeatureSelectLimit`

into `FeatureSelectLimitIntensity`

.Used to convert NV21 image format used in Android into BoofCV standard image types.

Converts OpenCV's image format into BoofCV's format.

Routines for converting to and from

`BufferedImage`

that use its internal
raster for better performance.

Convert between different types of

`TupleDesc`

.Convert a

`TupleDesc`

from double to float.

Converts two types of region descriptors.

Converts two types of region descriptors.

Does not modify the tuple and simply copies it

Functions for converting YUV 420 888 into BoofCV image types.

Packed format with ½ horizontal chroma resolution, also known as YUV 4:2:2

YUV / YCbCr image format.

Generalized interface for filtering images with convolution kernels while skipping pixels.

Standard implementation of

`ConvolveImageDownNoBorder`

where no special
optimization has been done.
Unrolls the convolution kernel to improve runtime performance by reducing array accesses.

Unrolls the convolution kernel to improve runtime performance by reducing array accesses.

Unrolls the convolution kernel to improve runtime performance by reducing array accesses.

Unrolls the convolution kernel to improve runtime performance by reducing array accesses.

Unrolls the convolution kernel to improve runtime performance by reducing array accesses.

Convolves a 1D kernel in the horizontal or vertical direction while skipping pixels across an image's border.

Down convolution with kernel renormalization around image borders.

Convolves a kernel across an image and handles the image border using the specified method.

Convolves a kernel which is composed entirely of 1's across an image.
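Convolving a kernel made entirely of 1's reduces to a sliding-window sum, which is why it can be computed far faster than a general convolution. A minimal 1D sketch of this idea (hypothetical, not the BoofCV implementation):

```java
// Hypothetical sketch: a width-3 kernel of all 1's convolved across one row
// is simply the sum of each pixel and its two horizontal neighbors.
public class OnesKernelConvolve {
    // Convolve a width-3 kernel of 1's across one row, skipping the border.
    static int[] convolveRow(int[] row) {
        int[] out = new int[row.length];
        for (int x = 1; x < row.length - 1; x++) {
            out[x] = row[x - 1] + row[x] + row[x + 1];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] row = {1, 2, 3, 4, 5};
        int[] out = convolveRow(row);
        // interior outputs are local sums: out[1]=6, out[2]=9, out[3]=12
        System.out.println(out[1] + " " + out[2] + " " + out[3]);
    }
}
```

Dividing each output by the kernel width turns this into the mean filter mentioned elsewhere in this list.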

Specialized convolution where the center of the convolution skips over a constant number
of pixels in the x and/or y axis.

Convolves a mean filter across the image.

Provides functions for convolving 1D and 2D kernels across an image, excluding the image border.

Performs a convolution around a single pixel only.

Convolves a kernel across an image and scales the kernel such that the sum of the portion inside
the image sums up to one.

Performs a convolution around a single pixel only using two 1D kernels in the horizontal and vertical direction.

Implementations of sparse convolve using image border.

Standard algorithms with no fancy optimization for convolving 1D and 2D kernels across an image.

Standard algorithms with no fancy optimization for convolving 1D and 2D kernels across an image.

Standard algorithms with no fancy optimization for convolving 1D and 2D kernels across an image.

Standard algorithms with no fancy optimization for convolving 1D and 2D kernels across an image.

General implementation of

`ConvolveImageNoBorderSparse`

.
Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Unrolls the convolution kernel to reduce array accessing and save often used variables to the stack.

Generic interface for performing image convolutions.

Convolves just the image's border.

Convolves just the image's border.

Convolves a 1D kernel in the horizontal or vertical direction across an image's border only, while re-normalizing the
kernel sum to one.

Convolution with kernel renormalization around image borders.

Convolution with kernel renormalization around image borders.

Straight forward implementation of

`ConvolveImageNormalizedSparse`

with minimal
optimizations.

Creates a point cloud from multiple disparity images.

Converts a camera image into an overhead orthogonal view with known metric properties given a known transform from the
plane to camera.

Implementation of

`CreateSyntheticOverheadView`

for `Planar`

.Implementation of

`CreateSyntheticOverheadView`

for `ImageGray`

.Renders a cylindrical view from an equirectangular image.

Renders a cylindrical view from an equirectangular image.

Functions for manipulating data by transforming it or converting its format.

Decomposes the absolute quadratic to extract the rectifying homography H.

Decomposes the essential matrix into a rigid body motion: rotation and translation.

Decomposes a homography matrix to extract its internal geometric structure.

Decomposes metric camera matrices as well as projective with known intrinsic parameters.

The default media manager used by BoofCV.

Denoises images using an adaptive soft-threshold in each sub-band computed using Bayesian statistics.

SureShrink denoises wavelets using a threshold computed by minimizing Stein's Unbiased Risk
Estimate (SURE).

Classic algorithm for wavelet noise reduction by shrinkage with a universal threshold.

Interface for algorithms which "denoise" the wavelet transform of an image.

Base class for pyramidal dense flow algorithms based on IPOL papers.

High level interface for computing the dense optical flow across the whole image.

Computes dense optical flow optical using pyramidal approach with square regions and a locally exhaustive search.

Implementation for

`GrayF32`

Implementation for

`GrayU8`

Computes the dense optical flow using

`KltTracker`

.Specifies how the image should be sampled when computing dense features

Samples disparity image in a regular grid pattern.

Computes the 3D coordinate a point in a visual camera given a depth image.

Wrapper around

`DepthSparse3D`

for `ImagePixelTo3D`

.Implementation for

`GrayF32`

.Implementation for

`GrayI`

.
Visual odometry that estimates the camera's ego-motion in Euclidean space using a camera image and
a depth image.

Functions for image derivatives.

Functions related to image derivatives in integral images.

The Laplacian is convolved across an image to find second derivative of the image.

Laplacian which processes the inner image only

Laplacian which processes the inner image only

Different ways to reduce a gradient

List of standard kernels used to compute the gradient of an image.

Wrapper around

`DescribePointBrief`

for `DescribePointRadiusAngle`

.Wrapper around

`DescribePointBriefSO`

for `DescribePointRadiusAngle`

Implementation of the Histogram of Oriented Gradients (HOG) [1] dense feature descriptor.

A variant on the original Histogram of Oriented Gradients (HOG) [1] in which spatial Gaussian weighting
has been omitted, allowing for cell histograms to be computed only once.

Computes

`SIFT`

features in a regular grid across an entire image at a single
scale and orientation.

Computes feature descriptors across the whole image.

Wrapper that converts an input image data type into a different one

Implementation of

`DescribeImageDense`

for `DescribeDenseHogFastAlg`

.High level wrapper around

`DescribeDenseSiftAlg`

for `DescribeImageDense`

Wrapper around

`DescribePointPixelRegionNCC`

for
`DescribePointRadiusAngle`

.Computes a feature description from

`Planar`

images by computing a descriptor separately in each band.

High level interface for describing the region around a point when given the pixel coordinate of the point
only.

Default implementations for all functions in

`DescribePoint`

.
For each bit in the descriptor it samples two points inside an image patch and compares their values.

BRIEF: Binary Robust Independent Elementary Features.
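The bit-sampling idea behind BRIEF can be illustrated with a tiny sketch (hypothetical, not BoofCV's `DescribePointBrief`; the sample-pair arrays here are invented for illustration): each descriptor bit records the outcome of comparing two pre-chosen pixel samples inside the patch.

```java
// Hypothetical minimal sketch of the BRIEF idea: each descriptor bit is the
// comparison of two pre-chosen sample points in an image patch.
public class BriefBitSketch {
    // samplesA[i] and samplesB[i] are {x,y} coordinates of the i-th comparison pair
    static long describe(int[][] patch, int[][] samplesA, int[][] samplesB) {
        long desc = 0;
        for (int i = 0; i < samplesA.length; i++) {
            int a = patch[samplesA[i][1]][samplesA[i][0]];
            int b = patch[samplesB[i][1]][samplesB[i][0]];
            if (a < b) desc |= 1L << i; // bit i set when the first sample is darker
        }
        return desc;
    }

    public static void main(String[] args) {
        int[][] patch = {{10, 50}, {30, 20}};
        int[][] a = {{0, 0}, {1, 1}};
        int[][] b = {{1, 0}, {0, 1}};
        // pair 0: patch[0][0]=10 < patch[0][1]=50 -> bit 0 set
        // pair 1: patch[1][1]=20 < patch[1][0]=30 -> bit 1 set
        System.out.println(Long.toBinaryString(describe(patch, a, b))); // 11
    }
}
```

Because the descriptor is a bit string, two BRIEF descriptions can be compared with a fast Hamming distance.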

Extension of

`DescribePointBrief`

which adds invariance to orientation and scale.

DescribePointConvertTuple<T extends ImageGray<T>,In extends TupleDesc<In>,Out extends TupleDesc<Out>>

Converts the region descriptor type from the

`DescribePointRadiusAngle`

into the desired output using a
`ConvertTupleDesc`

.Describes a rectangular region using its raw pixel intensities which have been normalized for intensity.

High level interface for describing the region around a point when given the pixel coordinate of the point,
the region's radius, and the region's orientation.

Implements

`DescribePointRadiusAngle`

but does nothing.

DescribePointRadiusAngleConvertImage<In extends ImageBase<In>,Mod extends ImageBase<Mod>,Desc extends TupleDesc<Desc>>

Used to automatically convert the input image type into one that's usable.

DescribePointRadiusAngleConvertTuple<T extends ImageGray<T>,In extends TupleDesc<In>,Out extends TupleDesc<Out>>

Converts the region descriptor type from the

`DescribePointRadiusAngle`

into the desired output using a
`ConvertTupleDesc`

.Describes a rectangular region using its raw pixel intensities.

Wrapper around

`DescribePointRawPixels`

for `DescribePointRadiusAngle`

.Base class for describing a rectangular region using pixels.

A faithful implementation of the SIFT descriptor.

Implementation of the SURF feature descriptor, see [1].

Modified SURF descriptor which attempts to smooth out edge conditions.

Computes a color SURF descriptor from a

`Planar`

image.

Convert

`DescribePointRadiusAngle`

into `DescribePoint`

.Allows you to use SIFT features independent of the SIFT detector.

Base class for

`SIFT`

descriptors.

Wrapper around SURF for

`DescribePoint`

.Wrapper around

`DescribePointSurf`

for `DescribePointRadiusAngle`

Wrapper around

`DescribePointSurfPlanar`

for `DescribePointRadiusAngle`

Series of simple functions for computing difference distance measures between two descriptors.
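Two of the most common descriptor distance measures are the squared Euclidean distance and the sum of absolute differences. A self-contained sketch (hypothetical code, not BoofCV's `DescriptorDistance` class):

```java
// Hypothetical sketch of simple descriptor distance measures.
public class DescriptorDistanceSketch {
    // Squared Euclidean distance between two descriptors of equal length.
    static double euclideanSq(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Sum of absolute differences between two descriptors.
    static double sad(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++)
            sum += Math.abs(a[i] - b[i]);
        return sum;
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3};
        double[] b = {2, 2, 5};
        System.out.println(euclideanSq(a, b)); // 5.0
        System.out.println(sad(a, b));         // 3.0
    }
}
```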

Provides information about the feature's descriptor.

Detects calibration points inside a chessboard calibration target.

Chessboard corner detector that's designed to be robust and fast.

Detects chessboard corners at multiple scales.

Given a binary image it detects the presence of chess board calibration grids.

Chessboard detector that uses X-Corners and finds all valid chessboard patterns inside the image.

Base class for grid based circle fiducials.

Detects a hexagonal circle grid.

Detects regular grids of circles, see below.

Base class for detect-describe-associate type trackers.

DetectDescribeConvertTuple<Image extends ImageBase<Image>,In extends TupleDesc<In>,Out extends TupleDesc<Out>>

Used to convert the TupleDesc type.

Wrapper class around independent feature detectors, region orientation, and descriptors, that allow
them to be used as a single integrated unit.

Deprecated.

Interface for detecting and describing point features.

Abstract class with default implementations of functions.

Deprecated.

Computes a color SURF descriptor from a

`Planar`

image.

Multi-threaded version of

`DetectDescribeSurfPlanar`

Detects lines using image gradient.

Wrapper around

`DetectEdgeLines`

that allows it to be used by `DetectLine`

interface
Square fiducial that encodes numerical values in a binary N by N grids, where N ≥ 3.

A fiducial composed of

`BaseDetectFiducialSquare`

intended for use in calibration.

A detected inner fiducial.

This detector decodes binary square fiducials where markers are identified from a set of markers which is much
smaller than the number of possible numbers in the grid.

Fiducial which uses images to describe arbitrary binary patterns.

Description of an image in 4 different orientations

Interface for detecting lines inside images.

Detects lines inside the image by breaking it up into subimages for improved precision.

Interface for detecting

`line segments`

inside images.

Wrapper around

`GridRansacLineDetector`

for `DetectLineSegment`

Calibration targets which can detect multiple targets at once with unique IDs

Detects polygons using contour of blobs in a binary image.

Detects convex polygons with the specified number of sides in an image.

Interface for extracting points from a planar calibration grid.

Detect a square grid calibration target and returns the corner points of each square.

High level interface for applying the forward and inverse Discrete Fourier Transform to an image.

Various functions related to

`DiscreteFourierTransform`

.Displays the entire image pyramid in a single panel.

Functions related to discretized circles for image processing

Computes the disparity SAD score efficiently for a single rectangular region while minimizing CPU cache misses.
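The SAD (Sum of Absolute Differences) score at the heart of block-matching stereo can be sketched in one dimension (a hypothetical minimal example, not BoofCV's cache-optimized implementation): for a candidate disparity d, a window in the left image is compared against the same window shifted by d in the rectified right image, and the disparity with the lowest score wins.

```java
// Hypothetical 1D sketch of the SAD disparity score for block matching.
public class SadScoreSketch {
    // SAD over a window of the given radius, comparing left pixel x against
    // right pixel x - d along the same rectified row.
    static int sad(int[] left, int[] right, int x, int d, int radius) {
        int sum = 0;
        for (int i = -radius; i <= radius; i++) {
            sum += Math.abs(left[x + i] - right[x + i - d]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // right row is the left row shifted by a true disparity of 1
        int[] left  = {5, 5, 9, 9, 9, 5, 5, 5};
        int[] right = {5, 9, 9, 9, 5, 5, 5, 5};
        // the lowest (best) score occurs at the true disparity d=1
        System.out.println(sad(left, right, 3, 0, 1)); // 4
        System.out.println(sad(left, right, 3, 1, 1)); // 0
    }
}
```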

Scores the disparity for a point using multiple rectangular regions in an effort to reduce errors at object borders,
based on the 5-region algorithm described in [1].

DisparityBlockMatchCorrelation<T extends ImageGray<T>,D extends ImageGray<D>,TF extends ImageGray<TF>>

Wrapper around

`StereoDisparity`

that will (optionally) convert all inputs to float and normalize the input to have
zero mean and an absolute value of at most 1.
Base class for all dense stereo disparity score algorithms whose scores can be processed by

`DisparitySelect`

.Different disparity error functions that are available.

Describes the geometric meaning of values in a disparity image.

Implementation of

`DisparityBlockMatch`

for processing
input images of type `GrayF32`

.
Implementation of

`DisparityBlockMatch`

for processing
input images of type `GrayU8`

.
Implementation of

`DisparityBlockMatchBestFive`

for processing
images of type `GrayF32`

.
Implementation of

`DisparityBlockMatchBestFive`

for processing
images of type `GrayU8`

.
Selects the best disparity given the set of scores calculated by

`DisparityBlockMatch`

.Different types of error which can be applied to SGM

High level API for algorithm which attempt to reduce the noise in disparity images in a post processing step

Wrapper around

`ConnectedTwoRowSpeckleFiller_F32`

for `DisparitySmoother`

Base class for computing sparse stereo disparity scores using a block matching approach given a
rectified stereo pair.

Implementation of

`DisparitySparseRectifiedScoreBM`

that processes images of type `GrayF32`

.
Implementation of

`DisparitySparseRectifiedScoreBM`

that processes integer typed images.

Computes the disparity given disparity score calculations provided by

`DisparitySparseRectifiedScoreBM`

.
Renders a 3D point cloud using a perspective pin hole camera model.

Interface for accessing RGB values inside an image

Panel for displaying results from camera calibration.

Used to display a calibrated fisheye camera.

Displays information images of planar calibration targets.

Applies an affine transformation to the associated pair and computes the euclidean distance
between their locations.

Applies an affine transformation to the associated pair and computes the euclidean distance
squared between their locations.

Computes error using the epipolar constraint.

Computes error using the epipolar constraint when given observations as pointing vectors.

Wrapper around

`DistanceFromModel`

that allows it to be used by `DistanceFromModelViews`

.Computes the observation errors in pixels when the input is in normalized image coordinates.

Computes the observation errors in pixels when the input is in point vector coordinates.

Computes the error using

`ModelObservationResidual`

for `DistanceFromModel`

.Computes the error using

`ModelObservationResidual`

for `DistanceFromModel`

.Computes the observation errors in pixels when the input is in normalized image coordinates.

Computes geometric error in an uncalibrated stereo pair.

Computes the Euclidean error squared between 'p1' and 'p2' after projecting 'p1' into image 2.

Computes the Euclidean error squared between 'p1' and 'p2' after projecting 'p1' into image 2.

Image based reprojection error using error in view 2 and view 3.

Wrapper around

`EssentialResidualSampson`

for `DistanceFromModelMultiView`

.Computes the difference between a predicted observation and the actual observation.

Computes distance squared between p1 after applying the

`ScaleTranslate2D`

motion model and p2.

Computes distance squared between p1 after applying the

`ScaleTranslateRotate2D`

motion model and p2.

Computes the Euclidean error squared between the predicted and actual location of 2D features after applying
a

`Se2_F64`

transform.
Computes the error for a given camera motion from two calibrated views.

Computes the error for a given camera motion from two calibrated views.

Distance given a known rotation.

Estimates the accuracy of a trifocal tensor using reprojection error.

Computes transfer error squared from views 1 + 2 to 3 and 1 + 3 to 2.

Provides common function for distorting images.

Provides low level functions that

`FDistort`

can call.A transform which applies no transform.

A transform which applies no transform.

Pixel transform which sets the output to be exactly the same as the input

Pixel transform which sets the output to be exactly the same as the input

This video interface attempts to load a native reader.

Implementation of

`WebcamInterface`

that sees which of the known libraries are available and uses
the best ones.

Detects features using

`GeneralFeatureDetector`

but handles all the derivative computations automatically.

Information on different character encodings and ECI character sets

Wrapper around

`ECoCheckDetector`

for `FiducialDetector`

.Encodes and decodes the marker ID and cell ID from an encoded cell in a calibration target.

Detects chessboard patterns with marker and grid coordinate information encoded inside of the inner white squares.

Storage for a detected Error COrrecting Checkerboard (ECoCheck) marker found inside an image.

Renders an Error COrrecting Checkerboard (ECoCheck) marker to an image.

Defines bit sampling pattern for different sized grids.

Common functions that are needed for encoding, decoding, and detecting.

Data structure containing the contour along an edge.

Computes the edge intensity along an ellipse.

Looks at the difference in pixel values along the edge of a polygon and decides if it's a false positive or not.

Describes an edge pixel found inside a region.

A list of connected points along an edge.

Information about a view when elevating it from projective to metric.

Base class for ordering clusters of ellipses into grids

Specifies the grid.

Given a cluster of ellipses (created with

`EllipsesIntoClusters`

) order the ellipses into
a hexagonal grid pattern.

Given a cluster of ellipses (created with

`EllipsesIntoClusters`

) order the ellipses into a regular
grid.

Given an unordered list of ellipses found in the image, connect them into clusters.

Describes the appearance of an encoded cell

Applies geometric constraints to an estimated trifocal tensor.

Ensures that the source and/or destination features are uniquely associated by resolving ambiguity using
association score and preferring matches with better scores.

Implementation of

`EnforceUniqueByScore`

for `AssociateDescription`

.Implementation of

`EnforceUniqueByScore`

for `AssociateDescription2D`

.
Operations for improving the visibility of images.

Removes any ambiguous associations.

List of different algorithms for estimating Essential matrices

List of different algorithms for estimating Fundamental or Essential matrices

List of algorithms for solving the Perspective n-Point (PnP) problem

List of algorithms for estimating

`TrifocalTensor`

.Used to specify the type of error function used when optimizing multiview geometric functions

Given point correspondences x[1] and x[2] and a fundamental matrix F, compute the
correspondences x'[1] and x'[2] which minimize the geometric error
subject to x'[2] F' x'[1] = 0

Evaluates how 3D a pair of views are from their associated points

Base class for all distortions from an equirectangular image.

Base class for all distortions from an equirectangular image.

Transforms the equirectangular image as if the input image was taken by the camera at the same location but with
a rotation.

Contains common operations for handling coordinates in an equirectangular image.

Contains common operations for handling coordinates in an equirectangular image.

Finds the essential matrix given exactly 5 corresponding points.

Computes the Sampson distance residual for a set of observations given an essential matrix.

Marker interface for estimating a single fundamental, essential, or homography matrix given a set of
associated pairs.

Marker interface for estimating essential matrix or other 3x3 matrices from observations provided as
3D pointing vectors.

Marker interface for computing one solution to the Perspective N-Point (PnP) problem.

Interface for computing multiple solutions to the Projective N-Point (PrNP) problem.

Marker interface for computing a single

`TrifocalTensor`

given a set of `AssociatedTriple`

observations.

Implementation of

`GeoModelEstimator1toN`

for epipolar matrices.

Implementation of

`GeoModelEstimator1toN`

for PnP.
Marker interface for estimating several fundamental, essential, or homography matrices given a set of
associated pairs.

Marker interface for estimating essential matrix or other 3x3 matrices from observations provided as
3D pointing vectors

Interface for computing multiple solutions to the Perspective N-Point (PnP) problem.

Interface for computing multiple solutions to the Projective N-Point (PrNP) problem.

Implementation of

`GeoModelEstimatorNto1`

for epipolar matrices given observations in 2D.

Implementation of

`GeoModelEstimatorNto1`

for epipolar matrices given observations as pointing vectors.

Implementation of

`GeoModelEstimatorNto1`

for the PnP problem.

If the camera calibration is known for two views, then given canonical camera projection matrices (P1 = [I|0])
it is possible to compute the plane at infinity and from that elevate the views from projective to metric.

Expands to a new view using known camera intrinsics for all views.

Target camera is unknown.

Common operations when estimating a new view

Wrapper class for converting

`GeoModelEstimator1`

into `ModelGenerator`

.Evaluates the quality of a reconstruction based on various factors.

Common parent for metric and projective expand scene by one.

Creates algorithms for associating

`TupleDesc_F64`

features.

Factory for creating implementations of

`BackgroundModelStationary`

and `BackgroundModelMoving`

Creates instances of

`BinaryLabelContourFinder`

`FilterImageInterface`

wrappers around functions inside of `BinaryImageOps`

.Factory for creating different blur image filters.

Factory for creating BorderIndex1D.

Creates different brief descriptors.

Factory for creating different types of census transforms

Factory for creating different types of

`ConvertTupleDesc`

, which are used for converting image region
descriptors.

Factory for

`ConvolveInterface`

.Factory class for creating abstracted convolve down filters.

Factory for creating sparse convolutions.

Factory for creating wavelet based image denoising classes.

Creates implementations of

`DenseOpticalFlow`

.
Factory for creating different types of

`ImageGradient`

, which are used to compute
the image's derivative.

Creates filters for performing sparse derivative calculations.

Creates algorithms for describing point features.

Factory for creating

`DescribeImageDense`

.Returns low level implementations of dense image descriptor algorithms.

Factory for creating implementations of

`DescribePointRadiusAngle`

.Factory for creating implementations of

`DescribePointRadiusAngle`

.Creates instances of

`DetectDescribePoint`

for different feature detectors/describers.

Factory for specific implementations of Detect and Describe feature algorithms.

Factory for creating high level implementations of

`DetectLine`

and `DetectLineSegment`

.Factory for creating line and line segment detectors.

Creates instances of

`GeneralFeatureDetector`

, which detects the location of
point features inside an image.

Factory for operations which distort the image.

Creates different types of edge detectors.

Creates

`NonMaxSuppression`

for finding local maximums in feature intensity images.

Factory for creating fiducial detectors which implement

`FiducialDetector`

.Creates detectors of calibration targets.

Random filters for lambdas.

Factory for creating low level non-abstracted algorithms related to geometric vision

Factory for creating generalized images

Creates

`GImageMultiBand`

for different image types.

Used to create new images from its type alone

Contains functions that create classes which handle pixels outside the image border differently.

Factory for creating data type specific implementations of

`ImageBorder1D`

.Factory for creating image classifiers.

Provides an easy to use interface for removing noise from images.

Factory for creating common image types

Factory for

`ImageSuperpixels`

algorithms, which are used to segment the image into super pixels.

Provides feature intensity algorithms which conform to the

`GeneralFeatureIntensity`

interface.Factory for creating various types of interest point intensity algorithms.

Factory for creating interest point detectors which conform to the

`InterestPointDetector`

interface.

Factory for non-generic specific implementations of interest point detection algorithms.

Simplified interface for creating interpolation classes.

Factory used to create standard convolution kernels for floating point and
integer images.

Factory for creating Gaussian kernels for convolution.

Factory for creating algorithms related to 2D image motion.

Factory for creating abstracted algorithms related to multi-view geometry

Factory for creating robust false-positive tolerant estimation algorithms in multi-view geometry.

Factory for creating implementations of

`RegionOrientation`

that are used to estimate
the orientation of a local pixel region.

Creates specific implementations of local region orientation estimators.

Factory for creating instances of

`PointsToPolyline`

Factory for creating trackers which implement

`PointTracker`

.Factory for creating classes related to image pyramids.

Factory for creating

`SceneRecognition`

and related.

Factory for operations related to scene reconstruction

Factory for implementations of

`SearchLocalPeak`

Factory for low level segmentation algorithms.

Factory that creates

`FeatureSelectLimitIntensity`

Factory for creating classes which don't go anywhere else.

Factory for detecting higher level shapes

Creates various filters for

`integral images`

.
Creates different steerable kernels.

Coefficients for common steerable bases.

Creates high level interfaces for computing the disparity between two rectified stereo images.

Algorithms related to computing the disparity between two rectified stereo images.

Factory for creating

`StitchingTransform`

of different motion models.

Factory for creating template matching algorithms.

Factory for creating various filters which convert an input image into a binary one

Factory for creating feature tracker algorithms.

Factory for creating low level implementations of object tracking algorithms.

Factory for implementations of

`TrackerObjectQuad`

, a high level interface for tracking user specified
objects inside video sequences.

Factory for creating classes related to clustering of

`TupleDesc`

data structures.

Factory for creating

`TupleDesc`

and related structures abstractly.

Factory for creating visual odometry algorithms.

Coiflet wavelets are designed to maintain a close match between the trend and the original
signal.

Creates different varieties of Daubechies (Daub) wavelets.

Coefficients for Haar wavelet.

Simplified factory for creating

`WaveletTransform`

.

Factory for creating sample weight functions of different types.

Renders image interest points in a thread safe manner.

Generic interface for fast corner detection algorithms.

Concurrent version of

`FastCornerDetector`

Low level interface for specific implementations of Fast Corner detectors.

The Fast Hessian (FH) [1] interest point detector is designed to be a fast multi-scale "blob" detector.

Provides a wrapper around a fast corner detector for

`InterestPointDetector`

; no non-maximum suppression will be done.

High level interface for rendering a distorted image into another one.

Generic graph of 2D points.

Connection between two nodes.

Base interface for classes which extract intensity images for image feature detection.

Feature detector across image pyramids that uses the Laplacian to determine strength in scale-space.

Detects scale invariant interest/corner points by computing the feature intensities across a pyramid of different scales.

More specialized version of

`SceneRecognition`

where it is assumed the input is composed of image features
that have been detected sparsely at different pixel coordinates.

Set of feature pixels and descriptions from an image.

Wrapper around

`RecognitionNearestNeighborInvertedFile`

for `FeatureSceneRecognition`

.

High level implementation of

`RecognitionVocabularyTreeNister2006`

for `FeatureSceneRecognition`

.

Selects a subset of the features inside the image until it hits the requested number.

Selects features inside the image until it hits a limit.

Selects features periodically in the order they were detected until it hits the limit.

Selects and sorts up to the N best features based on their intensity.

Randomly selects features up to the limit from the set of detected.

Attempts to select features uniformly across the image.

Implementation for

`Point2D_F32`

Implementation for

`Point2D_F64`

Implementation for

`Point2D_I16`

Info for each cell

Attempts to select features uniformly across the image with a preference for locally more intense features.

Info for each cell

Features can belong to multiple sets.

Checks to see if the features being tracked form

Used to construct a normalized histogram which represents the frequency of certain words in an image for use
in a BOW based classifier.

Creates a normalized histogram which represents the frequency of different visual words from the set of features.

Uses JavaCV, which uses FFMPEG, to read in a video.

Used to specify which technique should be used when expanding an image's border for use with FFT

Wrapper around

`SegmentFelzenszwalbHuttenlocher04`

for `ImageSuperpixels`

.

Computes edge weights for

`SegmentFelzenszwalbHuttenlocher04`

.

Computes edge weight as the absolute value of the difference in pixel value for single band images.

Computes edge weight as the F-norm difference in pixel value for

`Planar`

images.

Computes edge weight as the F-norm difference in pixel value for

`Planar`

images.

Computes edge weight as the absolute value of the difference in pixel value for single band images.

Computes edge weight as the absolute value of the difference in pixel value for single band images.

Computes edge weight as the F-norm difference in pixel value for

`Planar`

images.

Computes edge weight as the F-norm difference in pixel value for

`Planar`

images.

Computes edge weight as the absolute value of the difference in pixel value for single band images.

Interface for detecting fiducial markers and their location in the image.

Provides everything you need to convert an image based fiducial detector into one which can estimate
the fiducial's pose given control points.

Rendering engine for fiducials into a gray scale image.

Abstract class for generating images of fiducials

File IO for fiducials.

Interface for rendering fiducials to different document types.

Implementation of

`FiducialRenderEngine`

for a `BufferedImage`

.

Generates images of square fiducials

Renders a square hamming fiducial.

Results from fiducial stability computation.

Extension of

`FiducialDetector`

which allows for trackers.

Dialog which lets the user select a known file type and navigate the file system

Opens a dialog which lets the user select a single file but shows a preview of whatever file is currently selected

Lets the listener know what the user has chosen to do.

Filter implementation of

`CensusTransform`

.

Census

`GCensusTransform.dense3x3(T, boofcv.struct.image.GrayU8, boofcv.struct.border.ImageBorder<T>)`

transform with output in `GrayU8`

image.

Census

`GCensusTransform.dense5x5(T, boofcv.struct.image.GrayS32, boofcv.struct.border.ImageBorder<T>)`

transform with output in `GrayS32`

image.

Census transform which saves output in an

`InterleavedU16`

.

Census transform which saves output in a

`GrayS64`

.

Generalized interface for processing images.

Turns functions into implementations of

`FilterImageInterface`

Wraps around any function which has two images as input and output.

Applies a sequence of filters.

Given a list of associated features, find all the unassociated features.

Controls the camera using commands similar to a first-person shooter.

Contains the math for adjusting a camera using first-person-shooter inspired keyboard and mouse controls.

Structure that contains results from fitting a shape to a set of observations.

Refines a set of corner points along a contour by fitting lines to the points between the corners using a
least-squares technique.

Flips the image along the vertical axis.

Flips the image along the vertical axis.

Flips the image along the vertical axis and convert to normalized image coordinates using the
provided transform.

Wrapper around

`DenseOpticalFlowBlockPyramid`

for `DenseOpticalFlow`

.

Wrapper around

`DenseOpticalFlowKlt`

for `DenseOpticalFlow`

.

Contains the ID and pose for a fiducial

List of detected features that are invariant to scale and in-plane rotation.

Computes the stability for a fiducial using 4 synthetic corners that are positioned based on the fiducial's
width and height given the current estimated marker to camera transform.

A class to manage the data of audio and video frames.

Defines two methods to convert between a

`Frame`

and another generic
data object that can contain the same data.
Extracts the epipoles from an essential or fundamental matrix.

Base class for linear algebra based algorithms for computing the Fundamental/Essential matrices.

Computes the essential or fundamental matrix using exactly 7 points with linear algebra.

Given a set of 8 or more points this class computes the essential or fundamental matrix.

Computes the Sampson distance residual for a set of observations given a fundamental matrix.

Computes the residual just using the fundamental matrix constraint

Computes projective camera matrices from a fundamental matrix.

Basic math operations on polynomial Galois Field (GF) with GF(2) coefficients.
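Arithmetic over GF(2) polynomials, as used by these utilities, can be sketched in a few lines: a polynomial is stored as a bit mask where bit i is the coefficient of x^i, addition is XOR, and multiplication is carry-less. The class and method names below (`PolyGF2`, `add`, `multiply`) are illustrative stand-ins, not BoofCV's actual API.

```java
// Sketch of basic GF(2) polynomial arithmetic. Polynomials are bit
// masks: bit i holds the coefficient of x^i.
class PolyGF2 {
    // Addition and subtraction in GF(2) are both XOR.
    public static int add(int a, int b) {
        return a ^ b;
    }

    // Carry-less multiplication: for each set bit in 'b', XOR in a
    // shifted copy of 'a'.
    public static int multiply(int a, int b) {
        int product = 0;
        while (b != 0) {
            if ((b & 1) != 0)
                product ^= a;
            a <<= 1;
            b >>>= 1;
        }
        return product;
    }
}
```

For example, (x+1)*(x+1) = x^2 + 1 in GF(2), since the middle 2x term vanishes modulo 2.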

Precomputed look up table for performing operations on GF polynomials of the specified degree.

Precomputed look up table for performing operations on GF polynomials of the specified degree.

Precomputed look up table for performing operations on GF polynomials of the specified degree.

Interface for computing the scale space of an image and its derivatives.

Generalized functions for applying different image blur operators.

The Census Transform [1] computes a bit mask for each pixel in the image.
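What that per-pixel bit mask looks like can be sketched for a 3x3 neighborhood, with a row-major int array standing in for BoofCV's image types. The bit ordering and the comparison direction here are illustrative choices, not BoofCV's exact convention.

```java
// Minimal sketch of a 3x3 census transform for one interior pixel:
// each of the 8 neighbors contributes one bit, set when the neighbor
// is brighter than the center pixel.
class CensusSketch {
    public static int census3x3(int[] image, int width, int cx, int cy) {
        int center = image[cy * width + cx];
        int bits = 0, bit = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue; // skip the center pixel
                int v = image[(cy + dy) * width + (cx + dx)];
                if (v > center)
                    bits |= 1 << bit;
                bit++;
            }
        }
        return bits; // 8-bit mask describing the local neighborhood
    }
}
```

Because the mask depends only on orderings, not absolute intensities, it is robust to monotonic lighting changes, which is why it is popular in stereo matching.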

Generalized functions for converting between different image types.

Image type agnostic convolution functions

Implementation of functions in

`DiscreteFourierTransformOps`

which are image type agnostic
Detects features which are local maximums and/or local minimums in the feature intensity image.

Extracts corners from the image and/or its gradient.

Wrapper around

`GeneralPurposeFFT_F32_2D`

which implements `DiscreteFourierTransform`

Wrapper around

`GeneralPurposeFFT_F64_2D`

which implements `DiscreteFourierTransform`

Operations that return information about the specific image.

Computes 1D Discrete Fourier Transform (DFT) of complex and real, float
precision data.

Computes 2D Discrete Fourier Transform (DFT) of complex and real, float
precision data.

Computes 1D Discrete Fourier Transform (DFT) of complex and real, double
precision data.

Computes 2D Discrete Fourier Transform (DFT) of complex and real, double
precision data.

Wrapper around

`GeneralFeatureDetector`

to make it compatible with `InterestPointDetector`

.

Fits an

`Affine2D_F64`

motion model to a list of `AssociatedPair`

.

Wrapper around

`Estimate1ofEpipolar`

for `ModelGenerator`

.

Fits a homography to the observed points using linear algebra.

Wrapper around

`ProjectiveToMetricCameras`

and `Estimate1ofTrifocalTensor`

for use in robust model
fitting.

Wrapper around

`Estimate1ofPnP`

for `ModelGenerator`

.

Given a

`graph of images`

with similar appearance, create a graph in which
images with a geometric relationship are connected to each other.

Estimates a

`ScaleTranslate2D`

from two 2D point correspondences.

Estimates a

`ScaleTranslateRotate2D`

from three 2D point correspondences.

Uses

`MotionTransformPoint`

to estimate the rigid body motion in 2D between two sets of points.

Uses

`MotionTransformPoint`

to estimate the rigid body motion
from key-frame to current-frame in 2D between two observations of a point on the plane.

Given the

`sparse reconstruction`

, create a `StereoPairGraph`

that
describes which views are compatible for computing dense stereo disparity from.

Points visible in the view

Wrapper around

`Estimate1ofTrifocalTensor`

for `ModelGenerator`

Generalized interface for filtering images with convolution kernels.

Generalized interface for filtering images with convolution kernels while skipping pixels.

Dense feature computation which uses

`DescribePointRadiusAngle`

internally.

Weakly typed version of

`EnhanceImageOps`

Geographic coordinate consisting of latitude (north-south coordinate) and longitude (west-east).

Geographic coordinate consisting of latitude (north-south coordinate) and longitude (west-east).

Implementation of Geometric Mean filter as described in [1] with modifications to avoid numerical issues.

Common results of a geometric algorithm.

Creates a single hypothesis for the parameters in a model from a set of sample points/observations.

Wrapper that allows

`GeoModelEstimator1`

to be used as a `GeoModelEstimatorN`

.
Creates multiple hypotheses for the parameters in a model from a set of sample points/observations.

Wrapper that allows

`GeoModelEstimatorN`

to be used as a `GeoModelEstimator1`

.

Operations useful for unit tests

Creates different wavelet transform by specifying the image type.

Image type agnostic version of

`GradientToEdgeFeatures`

.

Weakly typed version of

`GrayImageOps`

.
Generic version of

`HistogramFeatureOps`

which determines image type at runtime.

Collection of functions that project Bands of Planar images onto
a single image.

Generalized operations related to compute different image derivatives.

Generalized interface for single banded images.

Implementation of

`GImageGray`

that applies a `PixelTransform`

then
`interpolates`

to get the pixel's value.

Generalized version of

`ImageMiscOps`

.

Generalized interface for working with multi-band images

Generalized version of

`ImageStatistics`

.

Functions for computing feature intensity on an image.

Provides a mechanism to call

`IntegralImageOps`

with unknown types at compile time.

Contains generalized functions with weak typing from

`KernelMath`

.

Base class for computing global thresholds

Computes a threshold based on entropy to create a binary image

Computes a threshold using Huang's equation.

Computes a threshold using Li's equation.

Computes a threshold using Otsu's equation.

Used to configure Swing UI settings across all apps

Applies a fixed threshold to an image.

Control panel

Generalized version of

`PixelMath`

.
Several different types of corner detectors [1,2] all share the same initial processing steps.

Generalized code for family of Gradient operators that have the kernels [-1 0 1]**[a b a]

Generalized code for family of Gradient operators that have the kernels [-1 0 1]**[a b a]

Interface for converting a multi-band gradient into a single band gradient.

GradientMultiToSingleBand_Reflection<Input extends ImageMultiBand<Input>,Output extends ImageGray<Output>>

Implementation of

`GradientMultiToSingleBand`

which uses reflection to invoke static
functions.

Operations for computing Prewitt image gradient.

Prewitt implementation that shares values for horizontal and vertical gradients

Prewitt implementation that shares values for horizontal and vertical gradients

Contains functions that reduce the number of bands in the input image into a single band.

Implementation of the standard 3x3 Scharr operator.

Computes the image's first derivative along the x and y axes using the Sobel operator.

This implementation of the Sobel edge detector is written so that the code can be easily read
and verified for correctness; however, it is much slower than it needs to be.

While not as fast as `GradientSobel` it is a big improvement over `GradientSobel_Naive` and much more readable.

While not as fast as `GradientSobel` it is a big improvement over `GradientSobel_Naive` and much more readable.
This is a further improvement on `GradientSobel_Outer` where it reduces the number of times the array needs to be read from by saving past reads in a local variable.

This is a further improvement on `GradientSobel_Outer` where it reduces the number of times the array needs to be read from by saving past reads in a local variable.

Sparse computation of the Prewitt gradient operator.

Sparse computation of the Prewitt gradient operator.

Sparse computation of the sobel gradient operator.

Sparse computation of the sobel gradient operator.

Sparse computation of the three gradient operator.

Sparse computation of the three gradient operator.

Sparse computation of the two-0 gradient operator.

Sparse computation of the two-0 gradient operator.

Sparse computation of the two-0 gradient operator.

Sparse computation of the two-0 gradient operator.

Computes the image's first derivative along the x and y axes using [-1 0 1] kernel.
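The [-1 0 1] kernel amounts to a central difference; a minimal sketch, with a plain row-major int array standing in for BoofCV's image classes (the class name `GradientThreeSketch` is hypothetical):

```java
// Sketch of the [-1 0 1] derivative: the x derivative at (x,y) is
// pixel(x+1,y) - pixel(x-1,y), and the y derivative is
// pixel(x,y+1) - pixel(x,y-1). Only valid for interior pixels.
class GradientThreeSketch {
    public static int derivX(int[] img, int width, int x, int y) {
        return img[y * width + x + 1] - img[y * width + x - 1];
    }

    public static int derivY(int[] img, int width, int x, int y) {
        return img[(y + 1) * width + x] - img[(y - 1) * width + x];
    }
}
```

Border pixels need separate handling, which is why the library pairs inner implementations with border-processing classes.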

This is an attempt to improve the performance by minimizing the number of times arrays are accessed
and partially unrolling loops.

Basic implementation of

`GradientThree`

with nothing fancy is done to improve its performance.
Basic implementation of

`GradientThree`

with nothing fancy is done to improve its performance.
Given the image's gradient in the x and y direction, compute the edge's intensity and orientation.

Computes the image's first derivative along the x and y axes using [-1 1] kernel, where the "center" of the
kernel is on the -1.

Basic implementation of

`GradientTwo0`

with nothing fancy is done to improve its performance.
Basic implementation of

`GradientTwo0`

with nothing fancy is done to improve its performance.
Computes the image's first derivative along the x and y axes using [-1 1] kernel, where the "center" of the
kernel is on the 1.

Basic implementation of

`GradientTwo1`

with nothing fancy is done to improve its performance.
Basic implementation of

`GradientTwo1`

with nothing fancy is done to improve its performance.

Image gradient at a specific pixel.

Specifies a pixel's gradient using float values.

Specifies a pixel's gradient using double values.

Specifies a pixel's gradient using integer values.

Base class for images with float pixels.

Image with a pixel type of 32-bit float.

Image with a pixel type of 64-bit float.

Base class for all integer images.

Base class for images with 16-bit pixels.

Base class for images with 8-bit pixels.

Pixel-wise operations on gray-scale images.

Image with a pixel type of signed 16-bit integer.

Gray scale image with a pixel type of signed 32-bit integer

Image with a pixel type of signed 64-bit integer

Image with a pixel type of signed 8-bit integer.

Image with a pixel type of unsigned 16-bit integer.

Image with a pixel type of unsigned 8-bit integer.

Coordinate in a 2D grid.

Computes the distance of a point from the line.

Used by

`GridRansacLineDetector`

to fit edgels inside a region to a line.
Line segment feature detector.

Specifies the dimension of a 3D grid

Everything you need to go from a grid coordinate into pixels using a homography

Interface for creating a copy of an image with a border added to it.

Implementations of

`GrowBorder`

for single band images.

Weakly typed version of

`ThresholdImageOps`

.

Renders Hamming markers inside of chessboard patterns similar to Charuco markers.

List of pre-generated dictionaries

Creates hamming grids

Implementation of

`HarrisCornerIntensity`

.
Implementation of

`HarrisCornerIntensity`

.
The Harris corner detector [1] is similar to the

`ShiTomasiCornerIntensity`

but avoids computing the eigenvalues
directly.

Helper class for

`EssentialNister5`

.
Detects "blob" intensity using the image's second derivative.

Different types of Hessian blob detectors

These functions compute the image hessian by computing the image gradient twice.

Computes the second derivative (Hessian) of an image using.

Prewitt implementation that shares values for horizontal and vertical gradients

Computes the second derivative (Hessian) of an image using.

Basic implementation of

`HessianThree`

with nothing fancy is done to improve its performance.
Computes the determinant of a Hessian computed by differentiating using [-1 0 1] kernel.

f(x,y) = Lxx*Lyy - Lxy^2

The Lxx and Lyy have a kernel of [1 0 -2 0 1] and Lxy is:
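A hedged sketch of that determinant-of-Hessian response, assuming the [1 0 -2 0 1] kernel for Lxx and Lyy and a simple diagonal-difference approximation for Lxy; the exact Lxy kernel and any scale factors in the library may differ, and `HessianDetSketch` is a hypothetical name.

```java
// Sketch: determinant of the Hessian at an interior pixel of a
// row-major int image. Lxx/Lyy use [1 0 -2 0 1] along x and y;
// Lxy differences the four diagonal neighbors.
class HessianDetSketch {
    public static int determinant(int[] img, int width, int x, int y) {
        int i = y * width + x;
        int lxx = img[i - 2] - 2 * img[i] + img[i + 2];
        int lyy = img[i - 2 * width] - 2 * img[i] + img[i + 2 * width];
        int lxy = img[i - width - 1] - img[i - width + 1]
                - img[i + width - 1] + img[i + width + 1];
        return lxx * lyy - lxy * lxy; // f = Lxx*Lyy - Lxy^2
    }
}
```

On the quadratic image I(x,y) = x^2 + y^2, Lxx and Lyy both evaluate to 8 and Lxy to 0, so the response is a constant 64 at every interior pixel.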

Hessian-Three derivative which processes the outer image border only

Hessian-Three derivative which processes the inner image only

Hessian-Three derivative which processes the inner image only

A hierarchical tree which discretizes an N-Dimensional space.

Node in the Vocabulary tree

A multi dimensional histogram.

2D histogram used to count.

Type specific operations for creating histograms of image pixel values.

Histogram which represents the frequency of different types of words in a single image.

Operations related to computing statistics from histograms.

Displays the image's histogram and shows the inlier set for a simple threshold

Using linear algebra it computes a planar homography matrix using 2D points, 3D points, or conics.

Wrapper around

`HomographyDirectLinearTransform`

for `Estimate1ofEpipolar`

.
Computes the homography induced by a plane from 2 line correspondences.

Computes the homography induced by a plane from 3 point correspondences.

Computes the homography induced by a plane from correspondences of a line and a point.

Estimates homography between two views and independent radial distortion from each camera.

Estimated homography matrix and radial distortion terms

Computes the Sampson distance residual for a set of observations given a homography matrix.

Computes the difference between the point projected by the homography and its observed location.

Wrapper around

`HomographyTotalLeastSquares`

for `Estimate1ofEpipolar`

.

Direct method for computing homography that is more computationally efficient and stable than DLT.

This is Horn-Schunck's well known work [1] for dense optical flow estimation.

Implementation of

`HornSchunck`

for `GrayF32`

.

Implementation of

`DenseOpticalFlow`

for `HornSchunck`

.

Implementation of

`HornSchunck`

for `GrayF32`

.
Pyramidal implementation of Horn-Schunck [2] based on the discussion in [1].

Implementation of

`DenseOpticalFlow`

for `HornSchunck`

.

Converts

`HoughTransformBinary`

into `DetectLine`

Converts

`HoughTransformGradient`

into `DetectLine`

`HoughTransformParameters`

with a foot-of-norm parameterization.

`HoughTransformParameters`

with a polar parameterization.
Hough transform which uses a polar line representation, distance from origin and angle (0 to 180 degrees).

Concurrent version of

`HoughTransformBinary`

Base class for Hough transforms which use a pixel coordinate and the gradient to describe a line.

Concurrent version of

`HoughTransformGradient`

Parameterizes a line to a coordinate for the Hough transform.

An image feature track for

`HybridTrackerScalePoint`

.
Combines a KLT tracker with Detect-Describe-Associate type trackers.

Given the output from edge non-maximum suppression, perform hysteresis threshold along the edge and mark selected
pixels in a binary image.

Given the output from edge non-maximum suppression, perform hysteresis threshold along the edge and constructs
a list of pixels belonging to each contour.

This exception is thrown when an attempt has been made to access part of an
image which is out of bounds.

DO NOT MODIFY: Generated by boofcv.alg.misc.GenerateImageBandMath.

Base class for all image types.

Lambda for each (x,y) coordinate in the image

Displays labeled binary images.

Used for displaying binary images.

A wrapper around a normal image that returns a numeric value if a pixel is requested that is outside of the image
boundary.

Child of

`ImageBorder`

for `GrayF32`

.

Child of

`ImageBorder`

for `GrayF64`

.

Child of

`ImageBorder`

for `InterleavedF32`

.

Child of

`ImageBorder`

for `InterleavedF32`

.

Child of

`ImageBorder`

for `InterleavedInteger`

.

Child of

`ImageBorder`

for `InterleavedInteger`

.

Child of

`ImageBorder`

for `GrayI`

.

Child of

`ImageBorder`

for `GrayI`

.

Interface for classes that modify the coordinate of a pixel so that it will always reference a pixel inside
the image.

Image border is handled independently along each axis by changing the indexes so that it references a pixel
inside the image.

All points outside of the image will return the specified value

Wraps a larger image and treats the inner portion as a regular image and uses the border pixels as a look up
table for external ones.

High level interface for a class which creates a variable length description of an image in text.

Displays a set of images and what their assigned labels are

High level interface for a classifier which assigns a single category to an image.

Provides information on the score for a specific category when multiple results are requested

Pretrained Network-in-Network (NiN) image classifier using ImageNet data.

1) Look at Torch source code
a) Determine the shape of the input tensor.

Image classification using a VGG network trained on CIFAR 10 data.

Abstract class for sparse image convolution.

Computes the fraction / percent of an image which is covered by image features.

Specifies if each cell in the grid contains at least one feature

Describes the physical characteristics of the internal primitive data types inside the image

Implementation of 'Moving Least Squares' (MLS) control point based image deformation models described in [1].

Abstract interface for computing image derivatives.

Specifies the width and height of an image

Copies an image onto another image while applying a transform to the pixel coordinates.

ImageDistortBasic<Input extends ImageBase<Input>,Output extends ImageBase<Output>,Interpolate extends InterpolatePixel<Input>>

Most basic implementation of

`ImageDistort`

.

Most basic implementation of

`ImageDistort`

for `ImageInterleaved`

.

ImageDistortBasic_IL_MT<Input extends ImageInterleaved<Input>,Output extends ImageInterleaved<Output>>

Most basic implementation of

`ImageDistort`

for `ImageInterleaved`

.

Most basic implementation of

`ImageDistort`

for `ImageGray`

.

Most basic implementation of

`ImageDistort`

for `ImageGray`

.

Except for very simple functions, computing the per pixel distortion is an expensive operation.

Except for very simple functions, computing the per pixel distortion is an expensive operation.

Used to create image data types

Iterator that returns images loaded from disk

The dense optical flow of an image.

Specifies the optical flow for a single pixel.

Interface for computing the output of functions which take as an input an image and a pixel coordinate.

Creates a new instance of an image of a specific configuration.

A generic interface for computing first order image derivative along the x and y axes.

Finds the derivative using a Gaussian kernel.

Wrapper for applying image gradients to

`Planar`

images.

Generic implementation which uses reflections to call derivative functions

ImageGradientThenReduce<Input extends ImageMultiBand<Input>,Middle extends ImageMultiBand<Middle>,Output extends ImageGray<Output>>

First computes a multi-band image gradient then reduces the number of bands in the gradient
to one.

A base class for a single band intensity image.

Breaks the image up into a grid.

Displays images in a grid pattern

A generic interface for computing image's second derivatives given the image's gradient.

Generic implementation which uses reflections to call hessian functions

A generic interface for computing image's second derivatives directly from the source image.

Generic implementation which uses reflections to call hessian functions

Draws a histogram of the image's pixel intensity level

Base class for images that contain multiple interleaved bands.

Operations to help with testing interleaved images.

Image filters which have been abstracted using lambdas.

Image filters which have been abstracted using lambdas.

Computes the line integral of a line segment across the image.

Draws lines over an image.

Draws lines over an image.

Functions for pruning and merging lines.

Provides different functions for normalizing the spatially local statics of an image.

Basic image operations which have no place better to go.

Base class for algorithms which process an image and load a model to do so

Estimates the 2D motion of images in a video sequence.

Computes the transform from the first image in a sequence to the current frame.

Examines tracks inside of

`ImageMotionPointTrackerKey`

and decides when new feature tracks should be respawned.

Base class for images with multiple bands.

Functions related to adjusting input pixels to ensure they have a known and fixed range.

Simple JPanel for displaying buffered images.

Generalized interface for sensors which allow pixels in an image to be converted into
3D world coordinates.

Image pyramids represent an image at different resolutions in a fine to coarse fashion.

Base class for image pyramids.

Displays an

`ImagePyramid`

by listing each of its layers and showing them one at a time.

Axis aligned rectangle with integer values for use on images.

Specifies an axis aligned rectangle inside an image using lower and upper extents.

Specifies an axis aligned rectangle inside an image using lower and upper extents.

Statistics on how accurately the found model fit each image during calibration.

Useful functions related to image segmentation

Very simple similarity test that looks at the ratio of total features in each image to the number of matched
features.

Computes statistical properties of pixels inside an image.

Given a sequence of images encoded with

`CombineFilesTogether`

, it will read the files from
the stream and decode them.

High level interface for computing superpixels.

Specifies the type of image data structure.

Simple JPanel for displaying buffered images allows images to be zoomed in and out of

Overlays a rectangular grid on top of the src image and computes the average value within each cell
which is then written into the dst image.

Implementation of

`AverageDownSampleOps`

specialized for square regions of width 2.

Implementation of

`AverageDownSampleOps`

specialized for square regions of width 2.

Implementation of

`AverageDownSampleOps`

specialized for square regions of width N.

Implementation of

`AverageDownSampleOps`

specialized for square regions of width N.
Implementation of

`BilinearPixelS`

for a specific image type.
Implementation of

`BilinearPixelS`

for a specific image type.
Implementation of

`BilinearPixelMB`

for a specific image type.
Implementation of

`BilinearPixelMB`

for a specific image type.
Implementation of

`BilinearPixelMB`

for a specific image type.
Implementation of

`BilinearPixelMB`

for a specific image type.
Implementation of

`BilinearPixelMB`

for a specific image type.
Implementation of

`BilinearPixelMB`

for a specific image type.
Implementation of

`BilinearPixelS`

for a specific image type.
Implementation of

`BilinearPixelS`

for a specific image type.
Implementation of

`BilinearPixelS`

for a specific image type.
Implementation of

`BilinearPixelS`

for a specific image type.

Binary operations performed only along the image's border.

Implementation for all operations which are not separated by inner and outer algorithms

Implementation for all operations which are not separated by inner and outer algorithms

Optimized binary operations for the interior of an image only.

Optimized binary operations for the interior of an image only.

Simple unoptimized implementations of binary operations.

Operations for handling borders in a Census Transform.

Implementations of Census transform.

Implementations of Census transform.

Low level implementation of function for converting HSV images.

Low level implementation of function for converting HSV images.

Low level implementation of function for converting LAB images.

Low level implementation of function for converting LAB images.

Low level implementation of function for converting RGB images.

Low level implementation of function for converting RGB images.

Low level implementation of function for converting XYZ images.

Low level implementation of function for converting XYZ images.

Low level implementation of function for converting YUV images.

Low level implementation of function for converting YUV images.

Functions for converting between different primitive image types.

Functions for converting between different primitive image types.

Low level operations for converting JCodec images.

NV21: The format is densely packed.

NV21: The format is densely packed.

`BufferedImage`

that use its internal
raster for better performance.

`BufferedImage`

that use its internal
raster for better performance.

Low level implementation of YUV-420 to RGB-888

Implementation of

`ConvertYuyv`

Implementation of

`ConvertYuyv`

Implementations of

`ConvertYV12`

Implementations of

`ConvertYV12`

Convolves a box filter across an image.

Convolves a box filter across an image.

Convolves a mean filter across the image.

Convolves a mean filter across the image.

Implementation of

`DescribePointBinaryCompare`

for a specific image type.
Implementation of

`DescribePointBinaryCompare`

for a specific image type.

Implementation of

`DescribePointRawPixels`

.

Implementation of

`DescribePointRawPixels`

.

Implementation of

`DescribePointPixelRegionNCC`

.

Implementation of

`DescribePointPixelRegionNCC`

.

Algorithms for performing non-max suppression.

Algorithms for performing non-max suppression.

Implementations of the crude version of non-maximum edge suppression.

Implementations of the crude version of non-maximum edge suppression.

Filter based functions for image enhancement.

Filter based functions for image enhancement.

Functions for enhancing images using the image histogram.

Functions for enhancing images using the image histogram.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Contains logic for detecting fast corners.

Helper functions for `FastCornerDetector` with `GrayF32` images.

Helper functions for `FastCornerDetector` with `GrayU8` images.
Implementations of the core algorithms of `GradientToEdgeFeatures`.

Implementations of the core algorithms of `GradientToEdgeFeatures`.

Contains implementations of algorithms in `GrayImageOps`.

Implementation of `GridRansacLineDetector` for `GrayF32`

Implementation of `GridRansacLineDetector` for `GrayS16`

Implementations of `HessianBlobIntensity`.

Implementation of algorithms in ImageBandMath

Implementation of algorithms in ImageBandMath

Implementation of `ImageDistort` for `Planar` images.

Implementations of functions for `ImageMiscOps`

Implementations of functions for `ImageMiscOps`

Computes statistical properties of pixels inside an image.

Computes statistical properties of pixels inside an image.

Compute the integral image for different types of input images.

Compute the integral image for different types of input images.

Routines for computing the intensity of the fast hessian features in an image.

Routines for computing the intensity of the fast hessian features in an image.

Compute the integral image for different types of input images.
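
The reason integral images appear so often here is that they turn any rectangle sum into four lookups. A minimal sketch of the construction and lookup (hypothetical class name, not BoofCV's API):

```java
// Sketch of an integral image: ii[y][x] holds the sum of all pixels above and
// to the left of (x,y), inclusive.
public class IntegralImageSketch {
    public static double[][] compute(double[][] img) {
        int h = img.length, w = img[0].length;
        double[][] ii = new double[h][w];
        for (int y = 0; y < h; y++) {
            double rowSum = 0;
            for (int x = 0; x < w; x++) {
                rowSum += img[y][x];
                ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
            }
        }
        return ii;
    }

    // Sum over the inclusive rectangle (x0,y0)..(x1,y1) using 4 lookups.
    public static double rectSum(double[][] ii, int x0, int y0, int x1, int y1) {
        double s = ii[y1][x1];
        if (x0 > 0) s -= ii[y1][x0 - 1];
        if (y0 > 0) s -= ii[y0 - 1][x1];
        if (x0 > 0 && y0 > 0) s += ii[y0 - 1][x0 - 1];
        return s;
    }
}
```

This constant-time rectangle sum is what makes fast hessian (SURF-style) box filters cheap at any scale.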

Performs interpolation by convolving a continuous-discrete function across the image.

Performs interpolation by convolving a continuous-discrete function across the image.

Performs interpolation by convolving a continuous-discrete function across the image.

Implementations of `KitRosCornerIntensity`.

Implementations of `MedianCornerIntensity`.
A faster version of the histogram median filter that only processes the inner portion of the image.

A faster version of the histogram median filter that only processes the inner portion of the image.

Simple implementation of a histogram based median filter.

Median filter which processes only the image edges and uses quick select to find the median.

Median filter which uses quick select to find the local median value.

Median filter which uses quick select to find the local median value.
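
Quick select finds the k-th smallest element in average O(n) time, which is why it beats sorting each window. A self-contained sketch (hypothetical class name, not the library's implementation, which partitions in place without cloning):

```java
// Sketch of finding a median with quick select (Lomuto partition).
public class QuickSelectSketch {
    public static int median(int[] data) {
        int[] a = data.clone(); // avoid mutating the caller's array
        return select(a, 0, a.length - 1, a.length / 2);
    }

    private static int select(int[] a, int lo, int hi, int k) {
        while (lo < hi) {
            int pivot = a[hi], i = lo;
            // move everything smaller than the pivot to the front
            for (int j = lo; j < hi; j++)
                if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
            int t = a[i]; a[i] = a[hi]; a[hi] = t; // place pivot at index i
            if (i == k) return a[i];
            if (i < k) lo = i + 1; else hi = i - 1;
        }
        return a[lo];
    }
}
```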

Implementation of functions in `MultiViewStereoOps`.
Implementation of `OrientationAverage` for a specific image type.

Implementation of `OrientationAverage` for a specific image type.

Implementation of `OrientationAverage` for a specific image type.
Estimates the orientation of a region by computing the image derivative from an integral image.

Implementation of `OrientationHistogram` for a specific image type.

Implementation of `OrientationHistogram` for a specific image type.

Implementation of `OrientationHistogram` for a specific image type.

Implementation of `OrientationImageAverage` for a specific image type.

Implementation of `OrientationImageAverage` for a specific image type.
Estimates the orientation of a region using a "derivative free" method.

Implementation of `OrientationSlidingWindow` for a specific image type.

Implementation of `OrientationSlidingWindow` for a specific image type.

Implementation of `OrientationSlidingWindow` for a specific image type.

Implementation of `OrientationSlidingWindow` for integral images.

Implementation of `PerspectiveOps` functions for 32-bit floats

Implementation of `PerspectiveOps` functions for 64-bit floats

Implementation of algorithms in PixelMath

Implementation of algorithms in PixelMath

Implementation of `PolynomialPixel`.

Implementation of `PolynomialPixel`.

Image type specific implementations of functions in `PyramidOps`.

Image type specific implementations of functions in `PyramidOps`.

Implementation of functions inside of `RectifyImageOps` for 32-bit floats

Implementation of functions inside of `RectifyImageOps` for 64-bit floats
Implementation of `ImplSsdCornerBase` for `GrayF32`.

Implementation of `ImplSsdCornerBase` for `GrayF32`.

Implementation of `ImplSsdCornerBase` for `GrayS16`.

Implementation of `ImplSsdCornerBase` for `GrayS16`.
Several corner detector algorithms work by computing a symmetric matrix whose elements are composed of the convolution
of the image's gradient squared.

Unweighted or box filter version of `ImplSsdCornerBase`

Naive implementation of `ShiTomasiCornerIntensity` which performs computations in a straightforward but inefficient manner.

Implementation of SSD Weighted Corner for `GrayF32` images.

Implementation of SSD Weighted Corner for `GrayF32` images.

Implementation of SSD Weighted Corner for `GrayS16` images.

Implementation of SSD Weighted Corner for `GrayS16` images.

Design Note: When estimating the
Operations for thresholding images and converting them into a binary image.

Operations for thresholding images and converting them into a binary image.

Performs the wavelet transform just around the image border.

Standard algorithm for forward and inverse wavelet transform which has been optimized to only
process the inner portion of the image by excluding the border.

Unoptimized and simplistic implementation of a forward and inverse wavelet transform across one
level.

X-Corner detector

X-Corner detector

The start and length of a segment inside an array

Given a set of views and a set of features which are visible in all views, estimate their metric structure.

Given a set of views and a set of features which are visible in all views, estimate their structure up to a
projective transform.

Operations for basic sanity checks on function arguments.

Interface for threshold filters `InputToBinary` which will convert the input image into the specified type prior to processing.

Routines for computing the intensity of the fast hessian features in an image.

Common operations for dealing with integral images.

Convolution kernel for an integral image.

Interface for automatic interest point detection in an image.

Implements most functions and provides reasonable default values.

Provides the capability to tack on a different algorithm for the feature's location, scale, and orientation.

Interest point detector for `Scale Space` images.

Interest point detector for `Scale-Space Pyramid` images.

`ImageInterleaved` for data of type float.

`ImageInterleaved` for data of type double.

`ImageInterleaved` for data of type short.

`ImageInterleaved` for data of type byte.

Functions related to interleaved images.

Base class for integer interleaved images.

An image where the primitive type is an unsigned short.

`ImageInterleaved` for data of type int.

`ImageInterleaved` for data of type int.
An image where the primitive type is a signed byte.

An image where the primitive type is an unsigned short.

An image where the primitive type is an unsigned byte.

Provides much of the basic housekeeping needed for interpolating 1D data.

Do linear interpolation between points in an array.
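
For reference, linear interpolation at a fractional array index is just a weighted blend of the two neighboring samples. A minimal sketch (hypothetical class name, not the library's API):

```java
// Sketch of linear interpolation between samples in an array.
public class LinearInterpSketch {
    public static double interpolate(double[] data, double t) {
        int i = (int) t;                       // index of the left sample
        if (i >= data.length - 1)
            return data[data.length - 1];      // clamp at the right edge
        double f = t - i;                      // fractional part in [0,1)
        return data[i] * (1.0 - f) + data[i + 1] * f;
    }
}
```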

Interface for interpolation between pixels on a per-pixel basis.

Wrapper that allows an `InterpolatePixelS` to be used as an `InterpolatePixelMB`; the input image has to be `ImageGray`.

Applies distortion to a coordinate then samples the image with interpolation.

Interface for interpolation between pixels on a per-pixel basis for a multi-band image.

Interface for interpolation between pixels on a per-pixel basis for a single band image.

Performs interpolation across a whole rectangular region inside the image.

List of built in interpolation algorithms.

Exception used to indicate that something went wrong when extracting the calibration grid's points.

The inverted file is a list of images that were observed in a particular node.

Wrapper around `PnPInfinitesimalPlanePoseEstimation` for `Estimate1ofPnP`.

Implements `Iterator` for a range of numbers in a List.

Extension of `Iterator` which allows you to call `IteratorReset.reset()` to return it to its original state.

Generalized computation for the Jacobian of a 3D rotation matrix

Implements a numerical Jacobian for the SO3

Jacobian for 4-tuple encoded `Quaternion` (w,x,y,z).

Jacobian for 3-tuple encoded Rodrigues.

A utility class to copy data between `Frame` and `BufferedImage`.

Class for launching JVMs.

Combines a spinner with a double value to reduce clutter.

Media manager for JCodec

Reads movie files using JCodec

Instance of `VideoInterface` which uses `JCodecSimplified`.

Control for setting the value of a `ConfigLength` class.

Create a sequence from an array of jpeg images in byte[] array format.

Configuration widget for specifying the number of levels in a discrete pyramid

Combines a spinner with a double value to reduce clutter.

Panel which uses `SpringLayout`.

Backwards project from a distorted 2D pixel to 3D unit sphere coordinate using the `CameraKannalaBrandt` model.

Backwards project from a distorted 2D pixel to 3D unit sphere coordinate using the `CameraKannalaBrandt` model.

Forward projection model for `CameraKannalaBrandt`.

Forward projection model for `CameraKannalaBrandt`.

Common functions for computing the forward and inverse model.

Common functions for computing the forward and inverse model.

Distance for word histograms

Distance using `TupleDesc_F32` for a `KdTree`.

Distance using `TupleDesc_F64` for a `KdTree`.

This is a kernel in a 1D convolution.

Floating point 1D convolution kernel that extends `Kernel1D`.

Floating point 1D convolution kernel that extends `Kernel1D`.

Floating point 1D convolution kernel that extends `Kernel1D`.

Base type for 2D convolution kernels

This is a kernel in a 2D convolution.

This is a kernel in a 2D convolution.

This is a kernel in a 2D convolution.

Base class for all convolution kernels.

Computes the instantaneous value of a continuous valued function.

Operations for manipulating different convolution kernels.

Specifies a size of a 2D kernel with a radius along each axis.

Computes key points from an observed hexagonal circular grid.

Computes key points from an observed regular circular grid.

Implementation of the Kitchen and Rosenfeld corner detector as described in [1].

Contains feature information for `KltTracker`.
A Kanade-Lucas-Tomasi (KLT) [1,2,3,4] point feature tracker for a single layer gray scale image.

Different types of faults that can cause a KLT track to be dropped.

For reading and writing images which have been labeled with polygon regions.

Encodes a labeled image using Run Line Encoding (RLE) to reduce file size.
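
Run Length Encoding stores each run of identical labels as a (label, count) pair, which compresses the large constant regions typical of labeled images. A minimal sketch for a single row (hypothetical class name, not the library's file format):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Run Length Encoding one row of a labeled image.
public class RleSketch {
    public static List<int[]> encodeRow(int[] labels) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < labels.length) {
            int label = labels[i], count = 0;
            // extend the run while the label stays the same
            while (i < labels.length && labels[i] == label) { i++; count++; }
            runs.add(new int[]{label, count});
        }
        return runs;
    }
}
```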

Lagrange's formula is a straightforward way to perform polynomial interpolation.
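
The formula evaluates the unique degree n-1 polynomial through n samples directly, without solving for coefficients. A self-contained sketch (hypothetical class name):

```java
// Sketch of Lagrange polynomial interpolation: evaluate at query point q the
// polynomial passing exactly through the samples (x[i], y[i]).
public class LagrangeSketch {
    public static double interpolate(double[] x, double[] y, double q) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) {
            // basis polynomial: 1 at x[i], 0 at every other sample point
            double basis = 1;
            for (int j = 0; j < x.length; j++)
                if (j != i) basis *= (q - x[j]) / (x[i] - x[j]);
            sum += y[i] * basis;
        }
        return sum;
    }
}
```

With samples of y = x² at x = 0, 1, 2, the interpolant reproduces the parabola exactly.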

The graph is constructed using a depth first search.

Learns node weights in the `HierarchicalVocabularyTree` for use in `RecognitionVocabularyTreeNister2006` by counting the number of unique images a specific node/word appears in, then computes the weight using an entropy-like cost function.

Abstract class which provides a framework for learning a scene classifier from a set of images.

Extension of `LeastMedianOfSquares` for two calibrated camera views.

LeastMedianOfSquares for dealing with projective geometry.

Improves upon the initial estimate of the Fundamental matrix by minimizing the error.

Improves upon the initial estimate of the Homography matrix by minimizing residuals.

Specifies how long and in what units something is.

`Brown` lens distortion model point transforms.

`Division` lens distortion model point transforms.

Factory for lens distortion given different built-in camera models.

Factory for creating forwards and backwards transforms using `CameraKannalaBrandt`

Interface for creating transforms between distorted and undistorted pixel/normalized-2D image coordinates for camera models that support FOV less than 180 degrees.

Operations for manipulating lens distortion which do not have F32 and F64 equivalents.

Operations related to manipulating lens distortion in images

Operations related to manipulating lens distortion in images

Projection when there is no lens distortion

Distortion for `CameraUniversalOmni`.

Interface for creating transforms between distorted and undistorted pixel/unit sphere coordinates for camera models that support FOV more than 180 degrees.

Creates a histogram in a color image and is used to identify the likelihood of a color being a member of the original distribution.

Creates a histogram in a gray scale image which is then used to compute the likelihood of a color being a
member of the original distribution based on its frequency.

Converts an RGB image into an HSV image to add invariance to changes in lighting conditions.

Converts an RGB image into an HSV image to add invariance to changes in lighting conditions.

Finds objects in a binary image by tracing their contours.

Finds the external contours of binary blobs in linear time.

Operations for working with lines detected inside an image.

Displays a list of items and their respective data.

Compact format for storing 2D points as a single integer in an array.
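
Packing a 2D coordinate into one int typically means giving 16 bits each to x and y, which avoids object allocation per point. A sketch of the idea (hypothetical class name; the library's exact bit layout may differ):

```java
// Sketch of packing a 2D pixel coordinate into a single int: low 16 bits
// hold x, high 16 bits hold y. Valid for coordinates in [0, 65535].
public class PackedPointSketch {
    public static int pack(int x, int y) { return (y << 16) | (x & 0xFFFF); }
    public static int unpackX(int packed) { return packed & 0xFFFF; }
    public static int unpackY(int packed) { return packed >>> 16; } // unsigned shift
}
```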

A document or marker which is described using `LLAH features`.

Describes a LLAH feature.

Functions related to computing the hash values of a LLAH feature.

Hash table that stores LLAH features.

Specifies the type of invariant used when computing LLAH

Locally Likely Arrangement Hashing (LLAH) [1] computes a descriptor for a landmark based on the local geometry of
neighboring landmarks on the image plane.

Used to relate observed dots to landmarks in a document

Documents that were found to match observed dots

Loads all the images in a directory that have the specified suffix.

Loads and optionally scales all the images in a list.

Adaptive/local threshold using a Gaussian region

Adaptive/local threshold using a square region

Computes a local histogram weighted using a Gaussian function.

Used to retrieve information about a view's camera.

Extracts the RGB color from an image

Specific implementations of `LookUpColorRgb`

Implementation of `LookUpImages` that converts the name into an integer.

The image ID or name is assumed to be the path to the image

Implementation of `LookUpImages` that converts the name into an integer and grabs images from memory.

Used to look up images as needed for disparity calculation.

Interface for finding images with a similar appearance and identifying point features which are related
between the two images.

Lists of operations used by various multi-view algorithms, but not of use to the typical user.

Forces the smallest singular value in the matrix to be zero

Found match during template matching.

Specifies the meaning of a match score.

A matrix of Lists for containing items in a grid.

Designed to be frame rate independent and maximize geometric information across frames.

Selects the point which is the farthest away from the line.

Wrapper around `SegmentMeanShift` for `ImageSuperpixels`.

Likelihood functions that can be used with mean-shift tracking

Simple implementations of mean-shift intended for finding local peaks inside an intensity image.

Wrapper around `MeanShiftPeak` for `SearchLocalPeak`

Abstract interface for accessing files, images, and videos.

Corner detector based on median filter.

Merges together regions which have modes close to each other and have a similar color.

Finds regions which are too small and merges them with a neighbor that is the most similar to it and connected.

Node in a graph.

Different functions that compute a synthetic color for each surface in a mesh.

Provides access to an arbitrary mesh.

Displays a rendered mesh in 3D and provides mouse and keyboard controls for moving the camera.

Lets the user configure controls and provides help explaining how to control.

Contains everything you need to do metric bundle adjustment in one location

Describes the camera pose and intrinsic parameters for a set of cameras.

Results of upgrading a three view scenario from a projective into a metric scene.

Expands a metric `scene` by one view (the target) using the geometric relationship between the target and two known metric views.

Solution for a view's metric state from a particular approach/set of assumptions.

Fully computes views (intrinsics + SE3) for each view and saves which observations were inliers.

Records which scenes have grown to include which views.

Contains information about which scenes contain this specific view

Merges two scenes together after their metric elevation.

Specifies which two "views" in each scene reference the same pairwise view.

Performs various checks to see if a scene is physically possible.

Given a view and set of views connected to it, attempt to create a new metric scene.

Information about a detected Micro QR Code.

Specifies information about the data in this marker

Error correction level

After the data bits have been read this will decode them and extract a meaningful message.

Given an image and a known location of a Micro QR Code, decode the marker.

High level interface for reading Micro QR Codes from gray scale images

Wrapper around `MicroQrCodeDetector` which allows the 3D pose of a Micro QR Code to be detected using `FiducialDetectorPnP`.

Provides an easy to use interface for specifying QR-Code parameters and generating the raw data sequence.

Generates an image of a Micro QR Code.

Masks that are applied to Micro QR codes to ensure that there are no regions with "structure" in them.

A Micro QR-Code detector which is designed to find the location of corners in the finder pattern precisely.

Utilities when estimating the 3D pose of a Micro QR Code.

Prunes corners from a pixel level accuracy contour by minimizing a penalized energy function.

An output stream which redirects the data into two different streams.

Instead of loading and decompressing the whole MJPEG at once, it loads the images
one at a time until it reaches the end of the file.

Base class for when you want to change the output type of a `ModelMatcherMultiview`.

Base class for when you want to change the output type of a `ModelMatcherMultiview`.

Wrapper that enables you to estimate an essential matrix while using a rigid body model

For use in cases where the model is a matrix and there is a 1-to-1 relationship with model
parameters.

Provides default implementations of ModelFitter functions.

Provides default implementations of `ModelGenerator`.

`ModelGenerator` with view specific information

`ModelManager` for 3x3 `DMatrixRMaj`.

Provides default implementations of ModelMatcher functions.

`ModelMatcher` for multiview problems.

`ModelMatcher` for multiview problems.

`ModelMatcher` with view specific information
Residual function for epipolar matrices with a single output for a single input.

Residual function for epipolar matrices where there are multiple outputs for a single input.

Estimates the camera's motion relative to the ground plane.

Wrapper around `MonocularPlaneVisualOdometry` which scales the input images.
Interface for visual odometry from a single camera that provides 6-DOF pose.

Wrapper around `VisOdomMonoOverheadMotion2D` for `MonocularPlaneVisualOdometry`.

Wrapper around `VisOdomMonoPlaneInfinity` for `MonocularPlaneVisualOdometry`.

Calibration parameters when the intrinsic parameters for a single camera are known, along with the location of the camera relative to the ground plane.

Operations related to simulating motion blur

Toggles a paused variable on each mouse click

Computes a moving average with a decay function; the decay variable sets how quickly the average is updated.
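
This is the standard exponential moving average recurrence; a minimal sketch with a hypothetical class name (the library's exact update rule and seeding may differ):

```java
// Sketch of a moving average with a decay term: larger decay retains more of
// the old average, so the output responds more slowly to new samples.
public class MovingAverageSketch {
    private final double decay;   // fraction of the old average retained, in [0,1]
    private double average;
    private boolean first = true;

    public MovingAverageSketch(double decay) { this.decay = decay; }

    public double update(double sample) {
        if (first) { average = sample; first = false; } // seed with first sample
        else average = decay * average + (1.0 - decay) * sample;
        return average;
    }
}
```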

Wrapper around `TrackerMeanShiftLikelihood` for `TrackerObjectQuad`

Given a set of disparity images, all of which were computed from the same left image, fuse into a single
disparity image.

Solution for the Multi Baseline Stereo (MBS) problem which uses independently computed stereo
disparity images [1] with one common "center" image.

Used to gain access to intermediate results

Intrinsic and extrinsic calibration for a multi