Enumerations

enum cv::InterpolationFlags { cv::INTER_NEAREST = 0, cv::INTER_LINEAR = 1, cv::INTER_CUBIC = 2, cv::INTER_AREA = 3, cv::INTER_LANCZOS4 = 4, cv::INTER_LINEAR_EXACT = 5, cv::INTER_MAX = 7, cv::WARP_FILL_OUTLIERS = 8, cv::WARP_INVERSE_MAP = 16 }
    interpolation algorithm
enum cv::InterpolationMasks { cv::INTER_BITS = 5, cv::INTER_BITS2 = INTER_BITS * 2, cv::INTER_TAB_SIZE = 1 << INTER_BITS, cv::INTER_TAB_SIZE2 = INTER_TAB_SIZE * INTER_TAB_SIZE }
enum cv::WarpPolarMode { cv::WARP_POLAR_LINEAR = 0, cv::WARP_POLAR_LOG = 256 }
    Specify the polar mapping mode.
Functions

void cv::convertMaps(InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, int dstmap1type, bool nninterpolation = false)
    Converts image transformation maps from one representation to another.
Mat cv::getAffineTransform(const Point2f src[], const Point2f dst[])
    Calculates an affine transform from three pairs of corresponding points.
Mat cv::getAffineTransform(InputArray src, InputArray dst)
Mat cv::getPerspectiveTransform(InputArray src, InputArray dst, int solveMethod = DECOMP_LU)
    Calculates a perspective transform from four pairs of corresponding points.
Mat cv::getPerspectiveTransform(const Point2f src[], const Point2f dst[], int solveMethod = DECOMP_LU)
void cv::getRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType = -1)
    Retrieves a pixel rectangle from an image with sub-pixel accuracy.
Mat cv::getRotationMatrix2D(Point2f center, double angle, double scale)
    Calculates an affine matrix of 2D rotation.
void cv::invertAffineTransform(InputArray M, OutputArray iM)
    Inverts an affine transformation.
void cv::linearPolar(InputArray src, OutputArray dst, Point2f center, double maxRadius, int flags)
    Remaps an image to polar coordinates space.
void cv::logPolar(InputArray src, OutputArray dst, Point2f center, double M, int flags)
    Remaps an image to semilog-polar coordinates space.
void cv::remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
    Applies a generic geometrical transformation to an image.
void cv::resize(InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR)
    Resizes an image.
void cv::warpAffine(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
    Applies an affine transformation to an image.
void cv::warpPerspective(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
    Applies a perspective transformation to an image.
void cv::warpPolar(InputArray src, OutputArray dst, Size dsize, Point2f center, double maxRadius, int flags)
    Remaps an image to polar or semilog-polar coordinates space.
The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel \((x, y)\) of the destination image, the functions compute coordinates of the corresponding "donor" pixel in the source image and copy the pixel value:
\[\texttt{dst} (x,y)= \texttt{src} (f_x(x,y), f_y(x,y))\]
When you specify the forward mapping \(\left<g_x, g_y\right>: \texttt{src} \rightarrow \texttt{dst}\), the OpenCV functions first compute the corresponding inverse mapping \(\left<f_x, f_y\right>: \texttt{dst} \rightarrow \texttt{src}\) and then use the above formula.
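As a concrete illustration of this reverse mapping, the following sketch (the function name and the chosen mapping are purely illustrative) builds explicit maps \(f_x(x,y)=x/2\) and \(f_y(x,y)=y/2\) and applies them with remap, which zooms the top-left quadrant of the source by a factor of 2:

```cpp
#include <opencv2/imgproc.hpp>

// For each destination pixel (x, y), sample the source at (x/2, y/2):
// dst(x, y) = src(f_x(x, y), f_y(x, y)) with f_x = x/2 and f_y = y/2.
cv::Mat zoomTopLeftQuadrant(const cv::Mat& src)
{
    cv::Mat map_x(src.size(), CV_32FC1);
    cv::Mat map_y(src.size(), CV_32FC1);
    for (int y = 0; y < src.rows; y++)
        for (int x = 0; x < src.cols; x++)
        {
            map_x.at<float>(y, x) = x * 0.5f;   // f_x(x, y)
            map_y.at<float>(y, x) = y * 0.5f;   // f_y(x, y)
        }
    cv::Mat dst;
    cv::remap(src, dst, map_x, map_y, cv::INTER_LINEAR);
    return dst;
}
```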
The actual implementations of the geometrical transformations, from the most generic remap to the simplest and fastest resize, need to solve two main problems with the above formula:

- Extrapolation of non-existing pixels. For some \((x,y)\), either one of \(f_x(x,y)\) or \(f_y(x,y)\), or both of them, may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions, plus BORDER_TRANSPARENT, which means that the corresponding pixels in the destination image are not modified at all.
- Interpolation of pixel values. Usually \(f_x(x,y)\) and \(f_y(x,y)\) are floating-point numbers, so a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates are rounded to the nearest integer (nearest-neighbor interpolation), but a better result can be achieved with the more sophisticated methods listed in InterpolationFlags.

Note: The geometrical transformations do not work with CV_8S or CV_32S images.

enum cv::InterpolationFlags

interpolation algorithm
Enumerator | Description
---|---
INTER_NEAREST | nearest neighbor interpolation
INTER_LINEAR | bilinear interpolation
INTER_CUBIC | bicubic interpolation
INTER_AREA | resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method.
INTER_LANCZOS4 | Lanczos interpolation over an 8x8 neighborhood
INTER_LINEAR_EXACT | bit-exact bilinear interpolation
INTER_MAX | mask for interpolation codes
WARP_FILL_OUTLIERS | flag, fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero.
WARP_INVERSE_MAP | flag, inverse transformation. For example, for the linearPolar or logPolar transforms: if the flag is not set, \(dst(\rho,\phi)=src(x,y)\); if it is set, \(dst(x,y)=src(\rho,\phi)\).
enum cv::WarpPolarMode
Specify the polar mapping mode.
Enumerator | Description
---|---
WARP_POLAR_LINEAR | Remaps an image to/from polar space.
WARP_POLAR_LOG | Remaps an image to/from semilog-polar space.
void cv::convertMaps(InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, int dstmap1type, bool nninterpolation = false)
Converts image transformation maps from one representation to another.
The function converts a pair of maps for remap from one representation to another. The following options ( (map1.type(), map2.type()) \(\rightarrow\) (dstmap1.type(), dstmap2.type()) ) are supported:

- (CV_32FC1, CV_32FC1) \(\rightarrow\) (CV_16SC2, CV_16UC1). This is the most frequently used conversion operation, in which the original floating-point maps (see remap) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false) contains indices in the interpolation tables.
- (CV_32FC2) \(\rightarrow\) (CV_16SC2, CV_16UC1). The same as above but the original maps are stored in one 2-channel matrix.
- Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.
Parameter | Description
---|---
map1 | The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2.
map2 | The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively.
dstmap1 | The first output map that has the type dstmap1type and the same size as src.
dstmap2 | The second output map.
dstmap1type | Type of the first output map; it should be CV_16SC2, CV_32FC1, or CV_32FC2.
nninterpolation | Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.
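A short usage sketch, assuming two CV_32FC1 maps map_x and map_y have already been prepared for remap (for example as in the earlier zoom sketch): the converted fixed-point pair is used with remap exactly like the original floating-point maps.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat remapWithFixedPointMaps(const cv::Mat& src, const cv::Mat& map_x, const cv::Mat& map_y)
{
    // Pack the floating-point maps into the compact fixed-point representation.
    cv::Mat fixed_xy, fixed_interp;
    cv::convertMaps(map_x, map_y, fixed_xy, fixed_interp, CV_16SC2, /*nninterpolation=*/false);

    cv::Mat dst;
    cv::remap(src, dst, fixed_xy, fixed_interp, cv::INTER_LINEAR);
    return dst;
}
```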
Mat cv::getAffineTransform(const Point2f src[], const Point2f dst[])
Calculates an affine transform from three pairs of corresponding points.
The function calculates the \(2 \times 3\) matrix of an affine transform so that:
\[\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \texttt{map_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}\]
where
\[dst(i)=(x'_i,y'_i), src(i)=(x_i, y_i), i=0,1,2\]
Parameter | Description
---|---
src | Coordinates of triangle vertices in the source image.
dst | Coordinates of the corresponding triangle vertices in the destination image.
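A usage sketch with purely illustrative point values: estimate the transform from three corresponding vertices and apply it with warpAffine.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat warpByThreePoints(const cv::Mat& src)
{
    // Three corresponding vertices (values chosen only for illustration).
    cv::Point2f srcTri[3] = { {0.f, 0.f},
                              {src.cols - 1.f, 0.f},
                              {0.f, src.rows - 1.f} };
    cv::Point2f dstTri[3] = { {0.f, src.rows * 0.33f},
                              {src.cols * 0.85f, src.rows * 0.25f},
                              {src.cols * 0.15f, src.rows * 0.7f} };

    cv::Mat warp_mat = cv::getAffineTransform(srcTri, dstTri);  // 2x3, CV_64F
    cv::Mat dst;
    cv::warpAffine(src, dst, warp_mat, src.size());
    return dst;
}
```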
Mat cv::getAffineTransform(InputArray src, InputArray dst)
Mat cv::getPerspectiveTransform(InputArray src, InputArray dst, int solveMethod = DECOMP_LU)
Calculates a perspective transform from four pairs of corresponding points.
The function calculates the \(3 \times 3\) matrix of a perspective transform so that:
\[\begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}\]
where
\[dst(i)=(x'_i,y'_i), src(i)=(x_i, y_i), i=0,1,2,3\]
Parameter | Description
---|---
src | Coordinates of quadrangle vertices in the source image.
dst | Coordinates of the corresponding quadrangle vertices in the destination image.
solveMethod | Method passed to cv::solve (DecompTypes).
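A usage sketch with illustrative corner coordinates: compute the homography that maps a skewed quadrilateral to an axis-aligned rectangle (for example, to de-skew a photographed document) and apply it with warpPerspective.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat rectifyQuad(const cv::Mat& src)
{
    // Four corresponding corners (illustrative values): quad -> 300x300 rectangle.
    cv::Point2f srcQuad[4] = { {56.f, 65.f}, {368.f, 52.f}, {389.f, 390.f}, {28.f, 387.f} };
    cv::Point2f dstQuad[4] = { {0.f, 0.f}, {299.f, 0.f}, {299.f, 299.f}, {0.f, 299.f} };

    cv::Mat M = cv::getPerspectiveTransform(srcQuad, dstQuad);  // 3x3, CV_64F
    cv::Mat dst;
    cv::warpPerspective(src, dst, M, cv::Size(300, 300));
    return dst;
}
```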
Mat cv::getPerspectiveTransform(const Point2f src[], const Point2f dst[], int solveMethod = DECOMP_LU)
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
void cv::getRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType = -1)
Retrieves a pixel rectangle from an image with sub-pixel accuracy.
The function getRectSubPix extracts pixels from src:
\[patch(x, y) = src(x + \texttt{center.x} - ( \texttt{dst.cols} -1)*0.5, y + \texttt{center.y} - ( \texttt{dst.rows} -1)*0.5)\]
where the values of the pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multi-channel images is processed independently. The image should be a single-channel or three-channel image. While the center of the rectangle must be inside the image, parts of the rectangle may be outside.
Parameter | Description
---|---
image | Source image.
patchSize | Size of the extracted patch.
center | Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image.
patch | Extracted patch that has the size patchSize and the same number of channels as src.
patchType | Depth of the extracted pixels. By default, they have the same depth as src.
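A minimal sketch (the patch size and center are illustrative): extract a 21x21 patch centered at a sub-pixel location; bilinear interpolation fills in the values between pixel centers.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat extractPatch(const cv::Mat& image)
{
    cv::Mat patch;
    cv::getRectSubPix(image, cv::Size(21, 21), cv::Point2f(100.3f, 47.8f), patch);
    return patch;
}
```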
Mat cv::getRotationMatrix2D(Point2f center, double angle, double scale)
Calculates an affine matrix of 2D rotation.
The function calculates the following matrix:
\[\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ - \beta & \alpha & \beta \cdot \texttt{center.x} + (1- \alpha ) \cdot \texttt{center.y} \end{bmatrix}\]
where
\[\begin{array}{l} \alpha = \texttt{scale} \cdot \cos \texttt{angle} , \\ \beta = \texttt{scale} \cdot \sin \texttt{angle} \end{array}\]
The transformation maps the rotation center to itself. If this is not the target, adjust the shift.
Parameter | Description
---|---
center | Center of the rotation in the source image.
angle | Rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).
scale | Isotropic scale factor.
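A typical usage sketch (the angle and scale are illustrative): rotate an image about its center with warpAffine.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat rotateAboutCenter(const cv::Mat& src)
{
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    cv::Mat R = cv::getRotationMatrix2D(center, /*angle=*/30.0, /*scale=*/1.0);  // 2x3 affine matrix

    cv::Mat dst;
    cv::warpAffine(src, dst, R, src.size());  // same canvas size, corners may be clipped
    return dst;
}
```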
void cv::invertAffineTransform(InputArray M, OutputArray iM)
Inverts an affine transformation.
The function computes an inverse affine transformation represented by \(2 \times 3\) matrix M:
\[\begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{bmatrix}\]
The result is also a \(2 \times 3\) matrix of the same type as M.
Parameter | Description
---|---
M | Original affine transformation.
iM | Output reverse affine transformation.
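A short sketch: invert a 2x3 matrix such as the rotation matrix from the previous example and use it to undo the warp. Equivalently, warpAffine could be called with the original matrix plus the WARP_INVERSE_MAP flag.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat undoAffine(const cv::Mat& warped, const cv::Mat& R)
{
    cv::Mat R_inv;
    cv::invertAffineTransform(R, R_inv);      // 2x3, same type as R

    cv::Mat restored;
    cv::warpAffine(warped, restored, R_inv, warped.size());
    return restored;
}
```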
void cv::linearPolar(InputArray src, OutputArray dst, Point2f center, double maxRadius, int flags)
Remaps an image to polar coordinates space.
void cv::logPolar(InputArray src, OutputArray dst, Point2f center, double M, int flags)
Remaps an image to semilog-polar coordinates space.
void cv::remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
Applies a generic geometrical transformation to an image.
The function remap transforms the source image using the specified map:
\[\texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))\]
where values of pixels with non-integer coordinates are computed using one of available interpolation methods. \(map_x\) and \(map_y\) can be encoded as separate floating-point maps in \(map_1\) and \(map_2\) respectively, or interleaved floating-point maps of \((x,y)\) in \(map_1\), or fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (2x) remapping operations. In the converted case, \(map_1\) contains pairs (cvFloor(x), cvFloor(y)) and \(map_2\) contains indices in a table of interpolation coefficients.
This function cannot operate in-place.
Parameter | Description
---|---
src | Source image.
dst | Destination image. It has the same size as map1 and the same type as src.
map1 | The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. See convertMaps for details on converting a floating point representation to fixed-point for speed.
map2 | The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.
interpolation | Interpolation method (see InterpolationFlags). The method INTER_AREA is not supported by this function.
borderMode | Pixel extrapolation method (see BorderTypes). When borderMode=BORDER_TRANSPARENT, the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function.
borderValue | Value used in case of a constant border. By default, it is 0.
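A sketch of the interleaved-map variant: both coordinates are packed into a single CV_32FC2 map and map2 is left empty. The maps below (illustrative only) implement a horizontal flip.

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat flipHorizontallyViaRemap(const cv::Mat& src)
{
    cv::Mat map_xy(src.size(), CV_32FC2);   // (x, y) pairs stored in one 2-channel map
    for (int y = 0; y < src.rows; y++)
        for (int x = 0; x < src.cols; x++)
            map_xy.at<cv::Point2f>(y, x) = cv::Point2f(src.cols - 1.f - x, float(y));

    cv::Mat dst;
    cv::remap(src, dst, map_xy, cv::Mat(), cv::INTER_NEAREST);  // empty second map
    return dst;
}
```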
void cv::resize(InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR)
Resizes an image.
The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:
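A sketch of such a call (src, dst and interpolation are assumed to already exist; interpolation stands for any InterpolationFlags value):

```cpp
// Explicitly specify dsize = dst.size(); fx and fy will then be computed from it.
cv::resize(src, dst, dst.size(), 0, 0, interpolation);
```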
If you want to decimate the image by a factor of 2 in each direction, you can call the function this way:
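Again a sketch with src, dst and interpolation assumed in scope:

```cpp
// Specify fx and fy and let the function compute the destination image size.
cv::resize(src, dst, cv::Size(), 0.5, 0.5, interpolation);
```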
To shrink an image, it will generally look best with INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with INTER_CUBIC (slow) or INTER_LINEAR (faster but still looks OK).
Parameter | Description
---|---
src | Input image.
dst | Output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.
dsize | Output image size; if it equals zero, it is computed as: \[\texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}\] Either dsize or both fx and fy must be non-zero.
fx | Scale factor along the horizontal axis; when it equals 0, it is computed as \[\texttt{(double)dsize.width/src.cols}\]
fy | Scale factor along the vertical axis; when it equals 0, it is computed as \[\texttt{(double)dsize.height/src.rows}\]
interpolation | Interpolation method, see InterpolationFlags.
void cv::warpAffine(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
Applies an affine transformation to an image.
The function warpAffine transforms the source image using the specified matrix:
\[\texttt{dst} (x,y) = \texttt{src} ( \texttt{M} _{11} x + \texttt{M} _{12} y + \texttt{M} _{13}, \texttt{M} _{21} x + \texttt{M} _{22} y + \texttt{M} _{23})\]
when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invertAffineTransform and then put in the formula above instead of M. The function cannot operate in-place.
Parameter | Description
---|---
src | Input image.
dst | Output image that has the size dsize and the same type as src.
M | \(2\times 3\) transformation matrix.
dsize | Size of the output image.
flags | Combination of interpolation methods (see InterpolationFlags) and the optional flag WARP_INVERSE_MAP that means that M is the inverse transformation ( \(\texttt{dst}\rightarrow\texttt{src}\) ).
borderMode | Pixel extrapolation method (see BorderTypes); when borderMode=BORDER_TRANSPARENT, the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function.
borderValue | Value used in case of a constant border; by default, it is 0.
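A sketch of the flags combination described above, assuming the matrix already maps dst to src (for example, an inverse obtained from invertAffineTransform):

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat warpWithInverseMatrix(const cv::Mat& src, const cv::Mat& M_dst_to_src)
{
    cv::Mat dst;
    cv::warpAffine(src, dst, M_dst_to_src, src.size(),
                   cv::INTER_LINEAR | cv::WARP_INVERSE_MAP,   // M is already dst -> src
                   cv::BORDER_CONSTANT, cv::Scalar());
    return dst;
}
```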
void cv::warpPerspective(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
Applies a perspective transformation to an image.
The function warpPerspective transforms the source image using the specified matrix:
\[\texttt{dst} (x,y) = \texttt{src} \left ( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}} , \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right )\]
when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M. The function cannot operate in-place.
Parameter | Description
---|---
src | Input image.
dst | Output image that has the size dsize and the same type as src.
M | \(3\times 3\) transformation matrix.
dsize | Size of the output image.
flags | Combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, which sets M as the inverse transformation ( \(\texttt{dst}\rightarrow\texttt{src}\) ).
borderMode | Pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE).
borderValue | Value used in case of a constant border; by default, it equals 0.
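A sketch applying a fixed, purely illustrative \(3\times 3\) matrix directly (no point correspondences needed), with replicated borders instead of a constant fill:

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat applyFixedHomography(const cv::Mat& src)
{
    // Mild shear, translation and perspective term; the values are illustrative only.
    cv::Mat M = (cv::Mat_<double>(3, 3) <<
                 1.0,    0.2,   10.0,
                 0.0,    1.0,   20.0,
                 0.0, 0.0005,    1.0);

    cv::Mat dst;
    cv::warpPerspective(src, dst, M, src.size(),
                        cv::INTER_LINEAR, cv::BORDER_REPLICATE);
    return dst;
}
```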
void cv::warpPolar(InputArray src, OutputArray dst, Size dsize, Point2f center, double maxRadius, int flags)
Remaps an image to polar or semilog-polar coordinates space.
Transform the source image using the following transformation:
\[ dst(\rho , \phi ) = src(x,y) \]
where
\[ \begin{array}{l} \vec{I} = (x - center.x, \;y - center.y) \\ \phi = Kangle \cdot \texttt{angle} (\vec{I}) \\ \rho = \left\{\begin{matrix} Klin \cdot \texttt{magnitude} (\vec{I}) & default \\ Klog \cdot log_e(\texttt{magnitude} (\vec{I})) & if \; semilog \\ \end{matrix}\right. \end{array} \]
and
\[ \begin{array}{l} Kangle = dsize.height / 2\Pi \\ Klin = dsize.width / maxRadius \\ Klog = dsize.width / log_e(maxRadius) \\ \end{array} \]
Polar mapping can be linear or semi-log. Add one of WarpPolarMode to flags to specify the polar mapping mode. Linear is the default mode. The semilog mapping emulates human "foveal" vision, which permits very high acuity along the line of sight (central vision), in contrast to peripheral vision, where acuity is much lower.
The destination size dsize is handled as follows:

- if both values in dsize are <= 0 (default), the destination image will have (almost) the same area as the source bounding circle: \[\begin{array}{l} dsize.area \leftarrow (maxRadius^2 \cdot \Pi) \\ dsize.width = \texttt{cvRound}(maxRadius) \\ dsize.height = \texttt{cvRound}(maxRadius \cdot \Pi) \\ \end{array}\]
- if only dsize.height <= 0, the destination image area will be proportional to the bounding circle area but scaled by Kx * Kx: \[dsize.height = \texttt{cvRound}(dsize.width \cdot \Pi)\]
- if both values in dsize are > 0, the destination image will have the given size and therefore the area of the bounding circle will be scaled to dsize.

You can get the reverse mapping by adding WARP_INVERSE_MAP to flags. In addition, to compute the original Cartesian coordinate \((x, y)\) from a polar-mapped coordinate \((\rho, \phi)\):
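A sketch of that inverse coordinate computation, derived directly from the Kangle, Klin and Klog relations above (rho, phi, dsize, center, maxRadius and flags are assumed to be known):

```cpp
#include <opencv2/imgproc.hpp>
#include <cmath>

// Map a polar-space pixel (rho, phi) back to the Cartesian coordinate (x, y).
cv::Point2d polarToCartesian(double rho, double phi, cv::Size dsize,
                             cv::Point2f center, double maxRadius, int flags)
{
    double Kangle = dsize.height / CV_2PI;
    double angleRad = phi / Kangle;

    double magnitude;
    if (flags & cv::WARP_POLAR_LOG)
    {
        double Klog = dsize.width / std::log(maxRadius);
        magnitude = std::exp(rho / Klog);       // semilog mode
    }
    else
    {
        double Klin = dsize.width / maxRadius;
        magnitude = rho / Klin;                 // linear (default) mode
    }
    return cv::Point2d(center.x + magnitude * std::cos(angleRad),
                       center.y + magnitude * std::sin(angleRad));
}
```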
Parameter | Description
---|---
src | Source image.
dst | Destination image. It will have the same type as src.
dsize | The destination image size (see description for valid options).
center | The transformation center.
maxRadius | The radius of the bounding circle to transform. It also determines the inverse magnitude scale parameter.
flags | A combination of interpolation methods (InterpolationFlags) and WarpPolarMode.
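A usage sketch: forward semilog-polar mapping of the whole image, followed by the corresponding inverse mapping obtained by adding WARP_INVERSE_MAP (the center and radius choices are illustrative):

```cpp
#include <opencv2/imgproc.hpp>
#include <algorithm>

void polarRoundTrip(const cv::Mat& src, cv::Mat& polar, cv::Mat& back)
{
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    double maxRadius = 0.7 * std::min(center.x, center.y);
    int flags = cv::INTER_LINEAR | cv::WARP_FILL_OUTLIERS | cv::WARP_POLAR_LOG;

    // dsize = Size() lets warpPolar pick a size matching the bounding circle area.
    cv::warpPolar(src, polar, cv::Size(), center, maxRadius, flags);

    // Adding WARP_INVERSE_MAP maps the polar image back to Cartesian space.
    cv::warpPolar(polar, back, src.size(), center, maxRadius, flags | cv::WARP_INVERSE_MAP);
}
```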