segmentation
skimage.segmentation.active_contour(image, snake)
    Active contour model.
skimage.segmentation.clear_border(labels[, ...])
    Clear objects connected to the label image border.
skimage.segmentation.felzenszwalb(image[, ...])
    Computes Felzenszwalb's efficient graph based image segmentation.
skimage.segmentation.find_boundaries(label_img)
    Return bool array where boundaries between labeled regions are True.
skimage.segmentation.join_segmentations(s1, s2)
    Return the join of the two input segmentations.
skimage.segmentation.mark_boundaries(image, ...)
    Return image with boundaries between labeled regions highlighted.
skimage.segmentation.quickshift
    Segments image using quickshift clustering in Color-(x,y) space.
skimage.segmentation.random_walker(data, labels)
    Random walker algorithm for segmentation from markers.
skimage.segmentation.relabel_from_one(*args, ...)
    Deprecated function. Use relabel_sequential instead.
skimage.segmentation.relabel_sequential(...)
    Relabel arbitrary labels to {offset, ... offset + number_of_labels}.
skimage.segmentation.slic(image[, ...])
    Segments image using k-means clustering in Color-(x,y,z) space.
skimage.segmentation.active_contour(image, snake, alpha=0.01, beta=0.1, w_line=0, w_edge=1, gamma=0.01, bc='periodic', max_px_move=1.0, max_iterations=2500, convergence=0.1)
Active contour model.
Active contours by fitting snakes to features of images. Supports single and multichannel 2D images. Snakes can be periodic (for segmentation) or have fixed and/or free ends.
Parameters:
    image : (N, M) or (N, M, 3) ndarray
    snake : (N, 2) ndarray
    alpha : float, optional
    beta : float, optional
    w_line : float, optional
    w_edge : float, optional
    gamma : float, optional
    bc : {'periodic', 'free', 'fixed'}, optional
    max_px_move : float, optional
    max_iterations : int, optional
    convergence : float, optional
Returns:
    snake : (N, 2) ndarray
References
[R349] Kass, M.; Witkin, A.; Terzopoulos, D. "Snakes: Active contour models". International Journal of Computer Vision 1 (4): 321 (1988).
Examples
>>> import numpy as np
>>> from skimage.draw import circle_perimeter
>>> from skimage.filters import gaussian_filter
>>> from skimage.segmentation import active_contour
Create and smooth image:
>>> img = np.zeros((100, 100))
>>> rr, cc = circle_perimeter(35, 45, 25)
>>> img[rr, cc] = 1
>>> img = gaussian_filter(img, 2)
Initialize the spline:
>>> s = np.linspace(0, 2*np.pi, 100)
>>> init = 50*np.array([np.cos(s), np.sin(s)]).T + 50
Fit spline to image:
>>> snake = active_contour(img, init, w_edge=0, w_line=1)
>>> dist = np.sqrt((45-snake[:, 0])**2 + (35-snake[:, 1])**2)
>>> int(np.mean(dist))
25
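The roles of alpha and beta can be made concrete with a small sketch. In the classic snake formulation the internal energy penalizes stretching (weighted by alpha) and bending (weighted by beta), which for a periodic snake leads to a circulant matrix built from second- and fourth-difference operators. The helper below is illustrative only, not the library's implementation:

```python
import numpy as np

def snake_internal_matrix(n, alpha=0.01, beta=0.1):
    """Internal-energy matrix for a periodic snake with n points (sketch)."""
    eye = np.eye(n)
    # circulant second-difference operator (stretching term, weight alpha)
    d2 = np.roll(eye, -1, axis=1) - 2 * eye + np.roll(eye, 1, axis=1)
    # fourth difference = second difference applied twice (bending, weight beta)
    d4 = d2 @ d2
    return alpha * d2 - beta * d4
```

An implicit Euler step then moves the contour as x_new = solve(I - gamma * A, x + gamma * f_ext), which is how gamma acts as a time step in the iteration.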
skimage.segmentation.clear_border(labels, buffer_size=0, bgval=0, in_place=False)
Clear objects connected to the label image border.
If in_place is True, the changes are applied directly to the input; otherwise a cleared copy is returned.
Parameters:
    labels : (N, M) array of int
    buffer_size : int, optional
    bgval : float or int, optional
    in_place : bool, optional
Returns:
    labels : (N, M) array
Examples
>>> import numpy as np
>>> from skimage.segmentation import clear_border
>>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 0],
... [0, 0, 0, 0, 1, 0, 0, 0, 0],
... [1, 0, 0, 1, 0, 1, 0, 0, 0],
... [0, 0, 1, 1, 1, 1, 1, 0, 0],
... [0, 1, 1, 1, 1, 1, 1, 1, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> clear_border(labels)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 1, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]])
skimage.segmentation.felzenszwalb(image, scale=1, sigma=0.8, min_size=20)
Computes Felzenszwalb's efficient graph based image segmentation.
Produces an oversegmentation of a multichannel (i.e. RGB) image using a fast, minimum spanning tree based clustering on the image grid. The parameter scale sets an observation level: a higher scale means fewer and larger segments. sigma is the diameter of a Gaussian kernel, used for smoothing the image prior to segmentation.
The number of produced segments as well as their size can only be controlled indirectly through scale. Segment size within an image can vary greatly depending on local contrast.
For RGB images, the algorithm computes a separate segmentation for each channel and then combines these. The combined segmentation is the intersection of the separate segmentations on the color channels.
Parameters:
    image : (width, height, 3) or (width, height) ndarray
    scale : float
    sigma : float
    min_size : int
Returns:
    segment_mask : (width, height) ndarray
References
[R350] Felzenszwalb, P.F. and Huttenlocher, D.P. "Efficient graph-based image segmentation". International Journal of Computer Vision, 2004.
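The per-channel combination described above is the same "join" operation as join_segmentations further down this page: two pixels end up in the same combined segment only if they agree in every channel's segmentation. A minimal NumPy sketch of that intersection (illustrative, not the library's implementation):

```python
import numpy as np

def combine_channel_segmentations(seg_r, seg_g, seg_b):
    # two pixels share a combined label only if their per-channel
    # labels agree in all three segmentations (the intersection)
    stacked = np.stack([seg_r, seg_g, seg_b], axis=-1).reshape(-1, 3)
    _, joined = np.unique(stacked, axis=0, return_inverse=True)
    return joined.reshape(seg_r.shape)
```

For example, three one-row segmentations [0,0,1,1], [0,1,1,1], and [0,0,0,1] combine into four segments, since no two pixels agree in all channels.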
skimage.segmentation.find_boundaries(label_img, connectivity=1, mode='thick', background=0)
Return bool array where boundaries between labeled regions are True.
Parameters:
    label_img : array of int or bool
    connectivity : int in {1, ..., label_img.ndim}, optional
    mode : string in {'thick', 'inner', 'outer', 'subpixel'}
    background : int, optional
Returns:
    boundaries : array of bool, same shape as label_img
Examples
>>> import numpy as np
>>> from skimage.segmentation import find_boundaries
>>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0],
... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
>>> find_boundaries(labels, mode='thick').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels, mode='inner').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 1, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels, mode='outer').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> labels_small = labels[::2, ::3]
>>> labels_small
array([[0, 0, 0, 0],
[0, 0, 5, 0],
[0, 1, 5, 0],
[0, 0, 5, 0],
[0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels_small, mode='subpixel').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0],
[0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> bool_image = np.array([[False, False, False, False, False],
... [False, False, False, False, False],
... [False, False, True, True, True],
... [False, False, True, True, True],
... [False, False, True, True, True]], dtype=bool)
>>> find_boundaries(bool_image)
array([[False, False, False, False, False],
[False, False, True, True, True],
[False, True, True, True, True],
[False, True, True, False, False],
[False, True, True, False, False]], dtype=bool)
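The 'thick' mode can be approximated in a few lines of NumPy: a pixel is a boundary pixel if any of its 4-neighbors carries a different label. This sketch assumes connectivity=1 and ignores the background parameter; it is illustrative, not the library's implementation:

```python
import numpy as np

def thick_boundaries(labels):
    """Mark pixels whose 4-neighborhood contains a different label (sketch)."""
    # edge-pad so border pixels compare against themselves, not wrap-around
    padded = np.pad(labels, 1, mode='edge')
    out = np.zeros(labels.shape, dtype=bool)
    h, w = labels.shape
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out |= shifted != labels   # neighbor differs -> boundary pixel
    return out
```

On the labels array from the example above, this reproduces the mode='thick' output exactly.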
skimage.segmentation.join_segmentations(s1, s2)
Return the join of the two input segmentations.
The join J of S1 and S2 is defined as the segmentation in which two voxels are in the same segment if and only if they are in the same segment in both S1 and S2.
Parameters:
    s1, s2 : numpy arrays
Returns:
    j : numpy array
Examples
>>> import numpy as np
>>> from skimage.segmentation import join_segmentations
>>> s1 = np.array([[0, 0, 1, 1],
... [0, 2, 1, 1],
... [2, 2, 2, 1]])
>>> s2 = np.array([[0, 1, 1, 0],
... [0, 1, 1, 0],
... [0, 1, 1, 1]])
>>> join_segmentations(s1, s2)
array([[0, 1, 3, 2],
[0, 5, 3, 2],
[4, 5, 5, 3]])
skimage.segmentation.mark_boundaries(image, label_img, color=(1, 1, 0), outline_color=None, mode='outer', background_label=0)
Return image with boundaries between labeled regions highlighted.
Parameters:
    image : (M, N[, 3]) array
    label_img : (M, N) array of int
    color : length-3 sequence, optional
    outline_color : length-3 sequence, optional
    mode : string in {'thick', 'inner', 'outer', 'subpixel'}, optional
    background_label : int, optional
Returns:
    marked : (M, N, 3) array of float
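At its core the overlay is simple: compute the boundary mask for label_img and paint those pixels with the requested color. A minimal sketch of the painting step (the real function also handles grayscale input, outline_color, and the mode options):

```python
import numpy as np

def mark_boundaries_sketch(image, boundary_mask, color=(1.0, 1.0, 0.0)):
    # paint boundary pixels of an RGB float image with the given color
    marked = np.asarray(image, dtype=float).copy()
    marked[boundary_mask] = color
    return marked
```

In practice the boundary_mask would come from find_boundaries(label_img, mode=...), which is why the two functions share the same mode vocabulary.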
See also
skimage.segmentation.find_boundaries
skimage.segmentation.quickshift(image, ratio=1.0, kernel_size=5, max_dist=10, return_tree=False, sigma=0, convert2lab=True, random_seed=None)
Segments image using quickshift clustering in Color-(x,y) space.
Produces an oversegmentation of the image using the quickshift mode-seeking algorithm.
Parameters:
    image : (width, height, channels) ndarray
    ratio : float, optional, between 0 and 1 (default 1)
    kernel_size : float, optional (default 5)
    max_dist : float, optional (default 10)
    return_tree : bool, optional (default False)
    sigma : float, optional (default 0)
    convert2lab : bool, optional (default True)
    random_seed : None (default) or int, optional
Returns:
    segment_mask : (width, height) ndarray
Notes
The authors advocate to convert the image to Lab color space prior to segmentation, though this is not strictly necessary. For this to work, the image must be given in RGB format.
References
[R351] Vedaldi, A. and Soatto, S. "Quick shift and kernel methods for mode seeking". European Conference on Computer Vision, 2008.
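The mode-seeking step itself can be sketched on a small point set: estimate a Parzen density with a Gaussian kernel of width kernel_size, then link every point to its nearest neighbor of strictly higher density within max_dist; the roots of the resulting forest are the segment modes. This is an illustrative sketch with hypothetical inputs, not the library's (x, y)-grid implementation:

```python
import numpy as np

def quickshift_modes(features, kernel_size=1.0, max_dist=3.0):
    """Quick shift on an (n, d) array of points in joint color-(x, y) space."""
    n = len(features)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    # Parzen density estimate with a Gaussian kernel
    density = np.exp(-d2 / (2 * kernel_size ** 2)).sum(axis=1)
    parent = np.arange(n)
    for i in range(n):
        # candidates: strictly higher density and within max_dist
        cand = np.where((density > density[i]) & (d2[i] <= max_dist ** 2))[0]
        if cand.size:
            parent[i] = cand[np.argmin(d2[i, cand])]
    for i in range(n):                      # follow links up to the root
        while parent[parent[i]] != parent[i]:
            parent[i] = parent[parent[i]]
    return parent                           # root index = segment label
```

Two well-separated point clusters therefore yield two segments, because no link can cross the gap once it exceeds max_dist.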
skimage.segmentation.random_walker(data, labels, beta=130, mode='bf', tol=0.001, copy=True, multichannel=False, return_full_prob=False, spacing=None)
Random walker algorithm for segmentation from markers.
The random walker algorithm is implemented for gray-level and multichannel images.
Parameters:
    data : array_like
    labels : array of ints, of same shape as data without channels dimension
    beta : float
    mode : string, available options {'cg_mg', 'cg', 'bf'}
    tol : float
    copy : bool
    multichannel : bool, default False
    return_full_prob : bool, default False
    spacing : iterable of floats
Returns:
    output : ndarray
See also
skimage.morphology.watershed
Notes
Multichannel inputs are scaled with all channel data combined. Ensure all channels are separately normalized prior to running this algorithm.
The spacing argument is specifically for anisotropic datasets, where data points are spaced differently in one or more spatial dimensions. Anisotropic data is commonly encountered in medical imaging.
The algorithm was first proposed in Random walks for image segmentation, Leo Grady, IEEE Trans Pattern Anal Mach Intell. 2006 Nov;28(11):1768-83.
The algorithm solves the diffusion equation at infinite times for sources placed on markers of each phase in turn. A pixel is labeled with the phase that has the greatest probability to diffuse first to the pixel.
The diffusion equation is solved by minimizing x.T L x for each phase, where L is the Laplacian of the weighted graph of the image, and x is the probability that a marker of the given phase arrives first at a pixel by diffusion (x=1 on markers of the phase, x=0 on the other markers, and the other coefficients are looked for). Each pixel is attributed the label for which it has a maximal value of x. The Laplacian L of the image is defined as:
- L_ii = d_i, the number of neighbors of pixel i (the degree of i)
- L_ij = -w_ij if i and j are adjacent pixels
The weight w_ij is a decreasing function of the norm of the local gradient. This ensures that diffusion is easier between pixels of similar values.
When the Laplacian is decomposed into blocks of marked and unmarked pixels:
    L = [[M, B.T],
         [B, A]]
with the first indices corresponding to marked pixels and the remaining indices to unmarked pixels, minimizing x.T L x for one phase amounts to solving:
    A x = -B x_m
where x_m = 1 on markers of the given phase, and 0 on other markers. This linear system is solved in the algorithm using a direct method for small images, and an iterative method for larger images.
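The linear algebra above fits in a short sketch for a 1-D signal: build the chain-graph Laplacian with gradient-dependent weights, split it into marked and unmarked blocks, and solve A x = -B x_m for each phase. Illustrative only; the library works on n-D grids with sparse solvers:

```python
import numpy as np

def random_walker_1d(values, markers, beta=130.0):
    # Laplacian of the 1-D chain graph; w_ij decreases with the local gradient
    n = len(values)
    w = np.exp(-beta * np.diff(values) ** 2)
    L = np.zeros((n, n))
    i = np.arange(n - 1)
    L[i, i + 1] = L[i + 1, i] = -w
    L[np.arange(n), np.arange(n)] = -L.sum(axis=1)   # degree terms d_i
    marked = markers > 0
    A = L[np.ix_(~marked, ~marked)]                  # unmarked block
    B = L[np.ix_(~marked, marked)]
    phases = np.unique(markers[marked])
    probs = np.zeros((len(phases), n))
    for k, phase in enumerate(phases):
        x_m = (markers[marked] == phase).astype(float)   # 1 on this phase's markers
        probs[k, marked] = x_m
        probs[k, ~marked] = np.linalg.solve(A, -B @ x_m)  # A x = -B x_m
    return phases[np.argmax(probs, axis=0)]          # label of max probability
```

On a step signal with one marker per side, the weak weight across the step keeps diffusion from crossing it, so each half takes the label of its own marker.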
Examples
>>> import numpy as np
>>> from skimage.segmentation import random_walker
>>> np.random.seed(0)
>>> a = np.zeros((10, 10)) + 0.2 * np.random.rand(10, 10)
>>> a[5:8, 5:8] += 1
>>> b = np.zeros_like(a)
>>> b[3, 3] = 1 # Marker for first phase
>>> b[6, 6] = 2 # Marker for second phase
>>> random_walker(a, b)
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)
skimage.segmentation.relabel_sequential(label_field, offset=1)
Relabel arbitrary labels to {offset, ... offset + number_of_labels}.
This function also returns the forward map (mapping the original labels to the reduced labels) and the inverse map (mapping the reduced labels back to the original ones).
Parameters:
    label_field : numpy array of int, arbitrary shape
    offset : int, optional
Returns:
    relabeled : numpy array of int, same shape as label_field
    forward_map : numpy array of int, shape
    inverse_map : 1D numpy array of int, of length offset + number of labels
Notes
The label 0 is assumed to denote the background and is never remapped.
The forward map can be extremely big for some inputs, since its length is given by the maximum of the label field. However, in most situations, label_field.max() is much smaller than label_field.size, and in these cases the forward map is guaranteed to be smaller than either the input or output images.
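The mapping can be sketched directly with np.unique; this illustrative version reproduces the relabeled output plus the forward and inverse maps, including the large-but-sparse forward map discussed in the notes:

```python
import numpy as np

def relabel_sequential_sketch(label_field, offset=1):
    # nonzero labels, in sorted order, become offset, offset+1, ...
    labels = np.unique(label_field)
    labels = labels[labels != 0]
    # forward map: indexed by original label, length label_field.max() + 1
    forward = np.zeros(int(label_field.max()) + 1, dtype=int)
    forward[labels] = np.arange(offset, offset + len(labels))
    # inverse map: indexed by new label, length offset + number of labels
    inverse = np.concatenate([np.zeros(offset, dtype=int), labels])
    return forward[label_field], forward, inverse
```

On the example below ([1, 1, 5, 5, 8, 99, 42]) this yields the same relabeling as the library function.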
Examples
>>> import numpy as np
>>> from skimage.segmentation import relabel_sequential
>>> label_field = np.array([1, 1, 5, 5, 8, 99, 42])
>>> relab, fw, inv = relabel_sequential(label_field)
>>> relab
array([1, 1, 2, 2, 3, 5, 4])
>>> fw
array([0, 1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 5])
>>> inv
array([ 0, 1, 5, 8, 42, 99])
>>> (fw[label_field] == relab).all()
True
>>> (inv[relab] == label_field).all()
True
>>> relab, fw, inv = relabel_sequential(label_field, offset=5)
>>> relab
array([5, 5, 6, 6, 7, 9, 8])
skimage.segmentation.slic(image, n_segments=100, compactness=10.0, max_iter=10, sigma=0, spacing=None, multichannel=True, convert2lab=None, enforce_connectivity=True, min_size_factor=0.5, max_size_factor=3, slic_zero=False)
Segments image using k-means clustering in Color-(x,y,z) space.
Parameters:
    image : 2D, 3D or 4D ndarray
    n_segments : int, optional
    compactness : float, optional
    max_iter : int, optional
    sigma : float or (3,) array-like of floats, optional
    spacing : (3,) array-like of floats, optional
    multichannel : bool, optional
    convert2lab : bool, optional
    enforce_connectivity : bool, optional
    min_size_factor : float, optional
    max_size_factor : float, optional
    slic_zero : bool, optional
Returns:
    labels : 2D or 3D array
Raises:
    ValueError
Notes
If sigma is scalar and spacing is provided, the smoothing kernel is rescaled per axis: with sigma=1 and spacing=[5, 1, 1], the effective sigma is [0.2, 1, 1]. This ensures sensible smoothing for anisotropic images.
References
[R352] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk, "SLIC Superpixels Compared to State-of-the-art Superpixel Methods", TPAMI, May 2012.
[R353] (1, 2) http://ivrg.epfl.ch/research/superpixels#SLICO
Examples
>>> from skimage.segmentation import slic
>>> from skimage.data import astronaut
>>> img = astronaut()
>>> segments = slic(img, n_segments=100, compactness=10)
Increasing the compactness parameter yields more square regions:
>>> segments = slic(img, n_segments=100, compactness=20)
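The anisotropic-smoothing rule from the notes (a scalar sigma divided by the voxel spacing along each axis, as the [0.2, 1, 1] example suggests) is a one-line computation; this check is illustrative:

```python
import numpy as np

sigma = 1.0
spacing = np.array([5.0, 1.0, 1.0])
effective_sigma = sigma / spacing   # per-axis kernel width after rescaling
```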