The following tutorial is intended to provide just enough information for you to quickly set up and start development with BoofCV.  If you are not familiar with the [http://java.oracle.com Java programming language] or its associated development tools, you must fix that first because BoofCV is written entirely in Java.  It is highly recommended that you use a tool like [https://gradle.org Gradle] to build your own project and have it download the jars for you.  If you enjoy doing things the slow and tedious way, we are there for you too and provide all of the jars.


== Step One: Obtaining ==

The first step in using BoofCV is either adding it to your dependency list, downloading the precompiled jars, or building it from source.

Latest Official Release:
* Source Code [[Download:BoofCV|Download Page]]
* Jars [[Download:BoofCV|Download Page]]
* Maven and Gradle [[Download:BoofCV|Download Page]] (see the sketch below)
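
If you use Gradle, adding BoofCV to your dependency list looks roughly like the sketch below.  This is only an illustration, not official instructions: the artifact name and version number are assumptions, so check the [[Download:BoofCV|Download Page]] for the coordinates of the current release.

<syntaxhighlight lang="groovy">
// build.gradle -- a minimal sketch.  The artifact name 'boofcv-core' and the
// version number are assumptions; look up the current release before copying this.
repositories {
    mavenCentral()
}

dependencies {
    compile group: 'org.boofcv', name: 'boofcv-core', version: '0.26'
}
</syntaxhighlight>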

== Step Two: Running Examples and Demonstrations ==

Examples are short pieces of code which are designed to be easy to understand and show you how to perform some task.  Demonstrations are more complex applications which visualize different aspects of an algorithm.  The code for a demonstration is not designed to be easy to learn from and can be quite complex due to its integration with a GUI.
 
Source Code:
* [https://github.com/lessthanoptimal/BoofCV/tree/master/examples boofcv/examples]
* [https://github.com/lessthanoptimal/BoofCV/tree/master/demonstrations boofcv/demonstrations]
 
The easiest way to run an example or demonstration is to launch the respective master application, as shown below.  You can also load up the source code in your favorite IDE and run the applications directly.


<syntaxhighlight lang="bash">
cd boofcv
./gradlew examples
java -jar examples/examples.jar
./gradlew demonstrations
java -jar demonstrations/demonstrations.jar
</syntaxhighlight>


= HELP ME!! =


Having trouble or have a suggestion?  Post a message on the BoofCV message board!  Don't worry, it's a friendly place.
* [http://groups.google.com/group/boofcv Message Board]


= Quick Reference =

The remainder of this tutorial is intended to act as a quick reference of low level image processing routines in BoofCV.

{| class="wikitable"
! Term !! Definition
|-
| single band || The image supports only one color
|-
| floating point || Image elements are of type float or double
|-
| unsigned || Image elements can only be positive integers
|-
| signed || Image elements can be either positive or negative integers
|-
| generics || Allows strong typing in abstracted code.  Introduced in Java 1.5.
|}


== The Basics ==
BoofCV supports 3 types of images: Gray (single band images), Planar (multi-band in a planar format), and Interleaved (the traditional multi-band image format).  Gray and planar images are fully supported because they are easier to work with most of the time; interleaved is only partially supported, in places where it offers a significant performance advantage.

<syntaxhighlight lang="java">
GrayU8 image = new GrayU8(100,150);
</syntaxhighlight>
Creates an unsigned 8-bit integer single band image with width=100 and height=150.


<syntaxhighlight lang="java">
GrayF32 image = new GrayF32(100,150);
</syntaxhighlight>
Creates a floating point single band image with width=100 and height=150.


<syntaxhighlight lang="java">
Planar<GrayU8> image = new Planar<GrayU8>(GrayU8.class,100,200,3);
</syntaxhighlight>
Creates a color planar image with 3 bands using GrayU8 for each band.
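
Interleaved images (mentioned in the introduction above) are created the same way.  A minimal sketch; the InterleavedU8 class and its (width, height, numBands) constructor are assumed here:

<syntaxhighlight lang="java">
// Interleaved 3-band unsigned 8-bit image with width=100 and height=200.
// InterleavedU8 and its (width,height,numBands) constructor are assumed.
InterleavedU8 image = new InterleavedU8(100,200,3);
</syntaxhighlight>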


<syntaxhighlight lang="java">
GrayF32 image = UtilImageIO.loadImage("test.png",GrayF32.class);
</syntaxhighlight>
Loads a single band image of type GrayF32 from a file.


<syntaxhighlight lang="java">
public static <T extends ImageBase> T generic( Class<T> imageType ) {
    T image = UtilImageIO.loadImage("test.png",imageType);
</syntaxhighlight>
Loads an image with the specified type inside a function that uses Java generics.


<syntaxhighlight lang="java">
BufferedImage out = ConvertBufferedImage.convertTo(image,null);
</syntaxhighlight>
Converts an image into a BufferedImage to provide better integration with Java2D (display/saving).  Pixel values must be in the range of 0 to 255.
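
Going the other way, from a BufferedImage into a BoofCV image, is just as common, e.g. after reading a file with ImageIO.  A minimal sketch; 'buffered' is assumed to be a BufferedImage loaded elsewhere, and the convertFrom() overload that allocates the output when passed null is assumed here:

<syntaxhighlight lang="java">
// Convert a BufferedImage into a BoofCV gray image.
// Passing null asks convertFrom() to allocate the output image (assumed overload).
GrayF32 gray = ConvertBufferedImage.convertFrom(buffered, (GrayF32)null);
</syntaxhighlight>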


<syntaxhighlight lang="java">
BufferedImage out = VisualizeImageData.grayMagnitude(derivX,null,-1);
</syntaxhighlight>
Renders a signed single band image into a gray intensity image.


<syntaxhighlight lang="java">
BufferedImage out = VisualizeImageData.colorizeSign(derivX,null,-1);
</syntaxhighlight>
Renders a signed single band image into a color intensity image.


<syntaxhighlight lang="java">
BufferedImage out = ConvertBufferedImage.convertTo(image,null);
ShowImages.showWindow(out,"Output");
</syntaxhighlight>
Displays an image in a window using Java Swing.


== Pixel Access ==

The image type must be known to access pixel information.  The following shows how to access pixels for different image types.  For more information on the image data structure and direct access to the raw data array, see [[Tutorial Images]].


<syntaxhighlight lang="java">
public static void function( GrayF32 image )
{
    float pixel = image.get(5,23);
    image.set(5,23,50.3f);
</syntaxhighlight>
Gets and sets the pixel at (5,23).  Note that set() and get() functions are image type specific.  In other words, you can't access a pixel without knowing the image type.
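
A common pattern built on these accessors is looping over every pixel using the public width and height fields.  A simple sketch; the method name is only for illustration:

<syntaxhighlight lang="java">
// Scale every pixel in place by looping over rows and columns.
public static void halfBrightness( GrayF32 image ) {
    for( int y = 0; y < image.height; y++ ) {
        for( int x = 0; x < image.width; x++ ) {
            image.set(x, y, image.get(x, y)*0.5f);
        }
    }
}
</syntaxhighlight>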


<syntaxhighlight lang="java">
public static void function( GrayU8 image )
{
    int pixel = image.get(5,23);
    image.set(5,23,50);
</syntaxhighlight>
Similar to the above example but for an 8-bit unsigned integer image.  Note that image.get() returns 'int' and not 'byte'.


<syntaxhighlight lang="java">
public static void function( GrayI image )
{
    int pixel = image.get(5,23);
    image.set(5,23,50);
</syntaxhighlight>
In fact, the same code will work for all integer images, except GrayS64, which uses longs and not ints.  Internally, GrayU8 stores its pixels in a byte array, but set() and get() use int because Java does not perform arithmetic directly on bytes.




<syntaxhighlight lang="java">
public static void function( Planar<GrayU8> image )
{
    int pixel = image.getBand(0).get(5,23);
    image.getBand(0).set(5,23,50);
</syntaxhighlight>
Planar images are essentially arrays of ImageGray.  To set or get a pixel value, first access the band that needs to be changed and then use the standard accessors in ImageGray.
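
If a single gray image is needed instead, one option is to average the bands together.  A sketch, assuming the ConvertImage.average() helper, which allocates its output when passed null:

<syntaxhighlight lang="java">
// Collapse a 3-band planar image into a single gray image by averaging its bands.
// ConvertImage.average(planar, output) is assumed; passing null allocates the output.
GrayU8 gray = ConvertImage.average(image, null);
</syntaxhighlight>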


== Filters ==


<syntaxhighlight lang="java">
public static void procedural( GrayU8 input )
{
    GrayU8 blurred = new GrayU8(input.width,input.height);
    BlurImageOps.gaussian(input,blurred,-1,blurRadius,null);
</syntaxhighlight>
Applies Gaussian blur to an image using a type specific procedural interface.


<syntaxhighlight lang="java">
public static <T extends ImageGray, D extends ImageGray>
void generalized( T input )
{
    Class<T> inputType = (Class<T>)input.getClass();

    T blurred = GeneralizedImageOps.createSingleBand(inputType, input.width, input.height);
    GBlurImageOps.gaussian(input, blurred, -1, blurRadius, null);
</syntaxhighlight>
Applies Gaussian blur to an image using an abstracted procedural interface.  Note the G in front of BlurImageOps that indicates it contains generic functions.


<syntaxhighlight lang="java">
public static <T extends ImageGray, D extends ImageGray>
void filter( T input )
{
    Class<T> inputType = (Class<T>)input.getClass();
    T blurred = GeneralizedImageOps.createSingleBand(inputType, input.width, input.height);
    BlurFilter<T> filterBlur = FactoryBlurFilter.gaussian(inputType, -1, blurRadius);
    filterBlur.process(input,blurred);
</syntaxhighlight>
Creates an image filter class for computing the Gaussian blur.  Provides greater abstraction.


<syntaxhighlight lang="java">
// type specific sobel
GradientSobel.process(blurred, derivX, derivY, FactoryImageBorder.extend(input));
// generic
GImageDerivativeOps.sobel(blurred, derivX, derivY, BorderType.EXTENDED);
// filter
ImageGradient<T,D> gradient = FactoryDerivative.sobel(inputType, derivType);
gradient.process(blurred,derivX,derivY);
</syntaxhighlight>
Three ways to compute the image gradient using a Sobel kernel.


<syntaxhighlight lang="java">
public static <T extends ImageGray, D extends ImageGray>
void example( T input , Class<D> derivType ) {
    AnyImageDerivative<T,D> deriv = GImageDerivativeOps.createDerivatives((Class<T>)input.getClass(),derivType);

    deriv.setInput(input);
    D derivX = deriv.getDerivative(true);
    D derivXXY = deriv.getDerivative(true,true,false);
</syntaxhighlight>
Useful class for computing arbitrary image derivatives.  Computes the 1st order x derivative and then the 3rd order xxy derivative.


== Binary Images ==


<syntaxhighlight lang="java">
ThresholdImageOps.threshold(image, binary, 23, true);
</syntaxhighlight>
Creates a binary image by thresholding the input image.  Binary must be of type GrayU8.
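
The threshold does not have to be picked by hand.  A sketch of selecting one automatically with Otsu's method; GThresholdImageOps.computeOtsu() and the generic GThresholdImageOps.threshold() are assumed here:

<syntaxhighlight lang="java">
// Select a global threshold with Otsu's method and apply it.
// 0 to 255 is the value range for 8-bit images; adjust for other image types.
double threshold = GThresholdImageOps.computeOtsu(image, 0, 255);
GThresholdImageOps.threshold(image, binary, threshold, true);
</syntaxhighlight>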


<syntaxhighlight lang="java">
binary = BinaryImageOps.erode8(binary, 1, null);
</syntaxhighlight>
Apply an erode operation once on the binary image, writing over the original image reference.


<syntaxhighlight lang="java">
BinaryImageOps.erode8(binary, 1, output);
</syntaxhighlight>
Apply an erode operation once on the binary image, saving results to the output binary image.


<syntaxhighlight lang="java">
BinaryImageOps.erode4(binary, 1, output);
</syntaxhighlight>
Apply an erode operation once with a 4-connect rule.


<syntaxhighlight lang="java">
int numBlobs = BinaryImageOps.contour(binary, ConnectRule.FOUR, blobs).size();
</syntaxhighlight>
Detect and label blobs in the binary image using a 4-connect rule.  blobs is an image of type GrayS32.
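
The contour() call also returns the contours themselves, which are often more useful than just their number.  A sketch of reading them back; the Contour class, with its 'external' and 'internal' lists of Point2D_I32 pixel coordinates, is assumed here:

<syntaxhighlight lang="java">
// Keep the full list instead of only its size.  Each Contour stores the blob's
// outer boundary and the boundaries of any holes as lists of pixel coordinates.
List<Contour> contours = BinaryImageOps.contour(binary, ConnectRule.FOUR, blobs);
for( Contour c : contours ) {
    List<Point2D_I32> outer = c.external;
    List<List<Point2D_I32>> holes = c.internal;
}
</syntaxhighlight>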


<syntaxhighlight lang="java">
BufferedImage visualized = VisualizeBinaryData.renderLabeled(blobs, numBlobs, null);
</syntaxhighlight>
Renders the detected blobs in a colored image.


<syntaxhighlight lang="java">
BufferedImage visualized = VisualizeBinaryData.renderBinary(binary, false, null);
</syntaxhighlight>
Renders the binary image as a black and white image.  The false argument means the colors are not inverted.
 
== Suggested Hardware ==
 
To do computer vision you need a camera.  Here's a list of recommended cameras:
 
* [http://amzn.to/2igJ9GC Webcam: Logitech C920]
* [http://amzn.to/2hWGohn 360 Camera: Theta S ]
 
Webcams are great for basically everything but structure from motion (SFM) applications.  Their images often look better than those from much more expensive scientific cameras.  Unfortunately, they have a rolling shutter, which breaks SFM algorithms if anything in the scene or the camera is moving.
 
The Theta S is a 360 camera composed of two fisheye cameras.  Interestingly, it is one of the few consumer grade cameras to provide a global shutter, making it useful for SFM applications.
 
(The above links are Amazon affiliate links.  If you do plan on purchasing one of these cameras, please help finance BoofCV by clicking through them.)
