Image Segmentation and Superpixels in BoofCV

This article provides an overview of image segmentation and superpixels in BoofCV.  Image segmentation is a problem in which an image is partitioned into groups of related pixels.  These pixel groups can then be used to identify objects and reduce the complexity of image processing.  Superpixels are a more specific type of segmentation where the partitions form connected clusters.

The following topics will be covered:

* Thresholding
* Color histogram
* Superpixels

For a list of code examples see GitHub.

'''Up to date as of BoofCV v0.17'''

= Thresholding =


The most basic way to segment a gray scale image is by thresholding.  Thresholding works by assigning 1 to all pixels below/above an intensity value and 0 to the rest.  Typically the pixels which are assigned a value of 1 belong to an object of interest.  BoofCV provides both global and adaptive (local) thresholding capabilities.  The images above demonstrate a global threshold being applied to a TEM image with particles.


Global thresholding is extremely fast, but it can be unclear what a good threshold is.  BoofCV provides several methods for automatically computing global thresholds based on different theories.  Otsu selects the threshold which minimizes the spread (variance) of the background and foreground pixels.  Entropy selects the threshold which maximizes the entropy of the foreground and background regions.  The image's mean can also be used as a quick threshold.  The code below demonstrates how to compute and apply these global thresholds.


<syntaxhighlight lang="java">
// global threshold using the image mean
GThresholdImageOps.threshold(input, binary, ImageStatistics.mean(input), true);
// global threshold computed using Otsu's method
GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 256), true);
// global threshold computed by maximizing entropy
GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeEntropy(input, 0, 256), true);
</syntaxhighlight>
The first two parameters are the input gray scale image and the output binary image.  The third parameter is the threshold.  The last parameter specifies whether it should threshold down or up.  If down is true then all pixels with a value less than or equal to the threshold are set to 1.
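
To make these parameters concrete, below is a minimal end-to-end sketch of a global threshold.  It assumes the v0.17 era image classes (ImageFloat32, ImageUInt8) and uses a placeholder file name; adjust both to your own setup.

<syntaxhighlight lang="java">
// load the image and convert it into a single band floating point image
// "particles.jpg" is just a placeholder file name
BufferedImage image = UtilImageIO.loadImage("particles.jpg");
ImageFloat32 input = ConvertBufferedImage.convertFromSingle(image, null, ImageFloat32.class);

// the output binary image must have the same dimensions as the input
ImageUInt8 binary = new ImageUInt8(input.width, input.height);

// threshold "down": pixels <= Otsu's threshold become 1 (dark objects of interest)
GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 256), true);

// threshold "up": pixels above the threshold become 1 (bright objects of interest)
GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 256), false);
</syntaxhighlight>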




A global threshold across the whole image breaks down if the lighting conditions vary within the image.  This is demonstrated in the figure above.  A calibration grid is the object of interest and a spotlight effect can be seen.  When a global Otsu threshold (center) is applied many of the squares are lost.  When an adaptive threshold using a square region (right) is used instead, the squares are clearly visible.  Adaptive thresholds based on square regions and Gaussian weighting work well as long as there is sufficient texture inside the local region.  If there is no texture then they produce noise.  The figure below shows a text sample with a complex background being binarized.  The adaptive square filter produces excessive noise, while the adaptive Sauvola filter produces a much cleaner image.
Below is a code snippet showing how an adaptive square region (radius=28, bias=0, down=true) is applied to an input image.
<syntaxhighlight lang="java">
// Adaptive using a square region
GThresholdImageOps.adaptiveSquare(input, binary, 28, 0, true, null, null);
// Adaptive using a square region with Gaussian weighting
GThresholdImageOps.adaptiveGaussian(input, binary, 42, 0, true, null, null);
// Adaptive Sauvola
GThresholdImageOps.adaptiveSauvola(input, binary, 5, 0.30f, true);
</syntaxhighlight>
See the JavaDoc for a complete description of each of these functions.  The null values indicate optional images used for internal work space.  Supplying the work space can reduce the overhead of creating/destroying memory.  When using Sauvola it can be faster to create ThresholdSauvola directly and reuse the class for each image which is thresholded.  Sauvola requires several internal images for storage and creating all that memory can become expensive.
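
As a rough sketch of that reuse pattern (the ThresholdSauvola constructor arguments and the process() call below simply mirror the adaptiveSauvola snippet above; check the exact signatures against the JavaDoc):

<syntaxhighlight lang="java">
// create the Sauvola thresholder once so its internal work images are reused
ThresholdSauvola sauvola = new ThresholdSauvola(5, 0.30f, true);

// 'frames' is a hypothetical list of same sized input images
ImageUInt8 binary = new ImageUInt8(width, height);  // width/height of the input images
for (ImageFloat32 frame : frames) {
	sauvola.process(frame, binary);  // no internal images are reallocated between calls
	// ... process 'binary' ...
}
</syntaxhighlight>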




After an image has been thresholded how do you extract objects from it?  This can be done by finding the contour around each binary blob.  In the code snippet below the outer and inner contours for each blob are found.  In the output "label" image, pixels which belong to the same blob are given a unique ID number.  The contours of blobs are critical to shape analysis.  A visualization of the labeled blobs is shown below.
<syntaxhighlight lang="java">
// find the outer and inner contours of all 8-connected blobs; 'label' is filled with each pixel's blob ID
List<Contour> contours = BinaryImageOps.contour(filtered, ConnectRule.EIGHT, label);
</syntaxhighlight>


<center>
<gallery caption="Binary Object Extraction" heights=150 widths=200 >
Image:example_binary_labeled.png|Labeled Binary Image
</gallery>
</center>
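
As a sketch of how the contour output can be inspected, the snippet below allocates the label image, finds the contours, and prints a summary of each blob.  It runs directly on the thresholded binary image (in practice the image is often filtered first, as the 'filtered' variable above suggests), uses the v0.17 era ImageSInt32 label image, and assumes Contour exposes its external and internal point lists as public fields.

<syntaxhighlight lang="java">
// storage for the labeled image; each pixel will hold the ID of the blob it belongs to
ImageSInt32 label = new ImageSInt32(binary.width, binary.height);

// find the outer and inner contours of every 8-connected blob
List<Contour> contours = BinaryImageOps.contour(binary, ConnectRule.EIGHT, label);

System.out.println("total blobs = " + contours.size());
for (Contour c : contours) {
	// one external boundary per blob, plus one internal boundary per hole
	System.out.println("  boundary points = " + c.external.size() + "  holes = " + c.internal.size());
}
</syntaxhighlight>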


=== Examples ===
 


* [[Example_Binary_Image|Example Binary Image]]
* [[Applet_Binary_Operations|Applet Binary Operations]]
* [https://github.com/lessthanoptimal/BoofCV/blob/master/examples/src/boofcv/examples/segmentation/ExampleThresholding.java Example Thresholding]


=== Thresholding API ===


* [https://github.com/lessthanoptimal/BoofCV/blob/master/main/ip/src/boofcv/alg/filter/binary/GThresholdImageOps.java GThresholdImageOps]
* [https://github.com/lessthanoptimal/BoofCV/blob/master/main/ip/src/boofcv/alg/filter/binary/ThresholdImageOps.java ThresholdImageOps]
* [https://github.com/lessthanoptimal/BoofCV/blob/master/main/ip/src/boofcv/alg/filter/binary/BinaryImageOps.java BinaryImageOps]


= Color Histogram =


Next up is color based image segmentation.  In this example you can click on a pixel and it will find all pixels with a similar color.  This is done by converting the image into HSV, a color space in which color is represented independently of intensity, and then selecting pixels which are similar.
One application includes identifying a road for an autonomous vehicle to drive along.
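
Below is a rough sketch of the HSV based selection just described.  The input 'rgb' image, the clicked coordinates (clickX, clickY), and the similarity tolerance are placeholders, and the MultiSpectral image type and rgbToHsv_F32() conversion are taken from the v0.17 era API; see the example code on GitHub for the complete version.

<syntaxhighlight lang="java">
// convert the RGB image into HSV so color can be compared independently of intensity
MultiSpectral<ImageFloat32> hsv =
		new MultiSpectral<ImageFloat32>(ImageFloat32.class, rgb.width, rgb.height, 3);
ColorHsv.rgbToHsv_F32(rgb, hsv);

ImageFloat32 H = hsv.getBand(0);
ImageFloat32 S = hsv.getBand(1);

// hue and saturation of the pixel the user clicked on
float hue = H.get(clickX, clickY);
float sat = S.get(clickX, clickY);

// mark every pixel whose hue/saturation is close to the selected color
double tol2 = 0.4 * 0.4;  // arbitrary tolerance, tune for your application
ImageUInt8 selected = new ImageUInt8(rgb.width, rgb.height);
for (int y = 0; y < hsv.height; y++) {
	for (int x = 0; x < hsv.width; x++) {
		double dh = UtilAngle.dist(H.get(x, y), hue);  // hue is an angle, so wrap the difference
		double ds = S.get(x, y) - sat;
		if (dh * dh + ds * ds <= tol2)
			selected.set(x, y, 1);
	}
}
</syntaxhighlight>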


= Superpixels =


Superpixels have been gaining popularity recently.  BoofCV contains implementations of several recently developed algorithms.

* Mean-Shift
* SLIC
* Felzenszwalb-Huttenlocher
* Watershed

Which one is the best is highly application dependent.  An application included with BoofCV allows you to experiment and play with different parameters.

(go through a few examples)
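
As a starting point, below is a hedged sketch of running one of these algorithms through the segmentation factory.  The factory and interface names (FactoryImageSegmentation, ImageSuperpixels) and the getTotalSuperpixels() call follow the current BoofCV API and are assumptions here; the v0.17 names may differ slightly, so check the JavaDoc.

<syntaxhighlight lang="java">
// create a Felzenszwalb-Huttenlocher superpixel segmenter; null selects the default configuration
ImageType<ImageFloat32> imageType = ImageType.single(ImageFloat32.class);
ImageSuperpixels<ImageFloat32> segmenter = FactoryImageSegmentation.fh04(null, imageType);

// every pixel in 'output' is assigned the ID of the superpixel which contains it
ImageSInt32 output = new ImageSInt32(input.width, input.height);
segmenter.segment(input, output);

System.out.println("total superpixels = " + segmenter.getTotalSuperpixels());
</syntaxhighlight>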