5.8. OpenCV Filters#

Summary:

Wrapped algorithms from OpenCV

Type:

Algorithm

License:

Licensed under LGPL.

Platforms:

Windows, Linux

Author:

W. Lyda, M. Gronle, J. Krauter, ITO, Universität Stuttgart

5.8.1. Overview#

This plugin provides wrappers for various OpenCV algorithms. These are for instance:

  • morphological filters (dilation, erosion)

  • image filtering (blur, median blur…)

  • 1D and 2D FFT and inverse FFT

  • histogram determination

  • feature detection (circles, chessboard corners, …)

This plugin requires access not only to the core library of OpenCV but also to further OpenCV modules like imgproc and calib3d.

This plugin was created at a time when OpenCV did not yet provide bindings for Python 3. Since OpenCV 3, these bindings exist, so almost all OpenCV methods can be accessed via the cv2 Python package. The wrapped methods within this plugin can still be used: in contrast to the cv2 methods, they can sometimes operate on multi-plane dataObjects, preserve the tags and meta information, and save protocol data.
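
As with every itom algorithm plugin, the wrapped filters are called through itom.algorithms. A minimal sketch (assuming an itom session with this plugin loaded; the random test object only stands in for real data, and cvBlur is documented below):

    from itom import dataObject, algorithms

    src = dataObject.randN([100, 100], 'float32')
    src.setTag('title', 'measurement')   # example tag that should survive the filter

    dst = dataObject()                   # empty output object, allocated by the filter
    algorithms.cvBlur(src, dst, kernelSizeX=5, kernelSizeY=5)

    print(dst.tags)                      # tags and meta information are passed to dst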

These filters are defined in the plugin:

  1. cvBilateralFilter()

  2. cvBlur()

  3. cvCalibrateCamera()

  4. cvCannyEdge()

  5. cvComputeCorrespondEpilines()

  6. cvCornerSubPix()

  7. cvCvtColor()

  8. cvDilate()

  9. cvDrawChessboardCorners()

  10. cvDrawKeypoints()

  11. cvDrawMatcher()

  12. cvErode()

  13. cvEstimateAffine3DParams()

  14. cvFFT1D()

  15. cvFFT2D()

  16. cvFindChessboardCorners()

  17. cvFindCircles()

  18. cvFindFundamentalMat()

  19. cvFindHomography()

  20. cvFlannBasedMatcher()

  21. cvFlipLeftRight()

  22. cvFlipUpDown()

  23. cvGetRotationMatrix2D()

  24. cvIFFT1D()

  25. cvIFFT2D()

  26. cvInitUndistortRectifyMap()

  27. cvMedianBlur()

  28. cvMergeChannels()

  29. cvMorphologyEx()

  30. cvProjectPoints()

  31. cvRemap()

  32. cvRemoveSpikes()

  33. cvResize()

  34. cvRotate180()

  35. cvRotateM90()

  36. cvRotateP90()

  37. cvSplitChannels()

  38. cvThreshold()

  39. cvUndistort()

  40. cvUndistortPoints()

  41. cvWarpAffine()

  42. cvWarpPerspective()

5.8.2. Filters#

Detailed overview about all defined filters:

itom.algorithms.cvBilateralFilter(inputObject, outputObject, diameter, sigmaColor, sigmaSpace[, borderType])#

Applies the bilateral filter to an image.

The function applies bilateral filtering to the input image 'inputObject'. Bilateral filtering reduces unwanted noise very well while keeping edges fairly sharp, but it is very slow compared to most other smoothing filters. Each output pixel is a weighted average of its neighborhood; the weights decrease with both the spatial distance (see sigmaSpace) and the color distance (see sigmaColor).

Parameters:
  • inputObject (itom.dataObject) – input image (8-bit or floating-point, 1-Channel or 3-Channel)

  • outputObject (itom.dataObject) – output image, will have the same type and size as inputObject.

  • diameter (int) –

    diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

    All values allowed, Default: 1

  • sigmaColor (float) –

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

    Value range: [1e-06, inf], Default: 1

  • sigmaSpace (float) –

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When diameter > 0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, diameter is proportional to sigmaSpace.

    Value range: [1e-06, inf], Default: 1

  • borderType (int, optional) –

    border mode used to extrapolate pixels outside of the image. The following values are possible:

    • BORDER_CONSTANT (0): iiiiii|abcdefgh|iiiiiii with some specified i

    • BORDER_REPLICATE (1): aaaaaa|abcdefgh|hhhhhhh

    • BORDER_REFLECT (2): fedcba|abcdefgh|hgfedcb

    • BORDER_WRAP (3): cdefgh|abcdefgh|abcdefg

    • BORDER_REFLECT_101 (4): gfedcb|abcdefgh|gfedcba

    • BORDER_ISOLATED: do not look outside of the ROI

    Value range: [0, 4], Default: 4
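
A minimal usage sketch (the random object only stands in for a real noisy image; the parameter values are illustrative):

    from itom import dataObject, algorithms

    noisy = dataObject.randN([256, 256], 'float32')
    smooth = dataObject()

    # a non-positive diameter lets the filter derive the neighborhood from sigmaSpace
    algorithms.cvBilateralFilter(noisy, smooth, -1, 0.5, 3.0)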

itom.algorithms.cvBlur(sourceImage, destinationImage[, kernelSizeX, kernelSizeY, anchor, borderType])#

Planewise blur filter.

This filter applies the method cv::blur to every plane in the source data object. The function smoothes the images with a simple mean filter; the result is contained in the destination object. It can handle data objects of type uint8, uint16, int16, int32, float32 and float64 only.

cv::blur internally calls the cv::boxFilter() method.

The itom wrapping currently does not work in place; a new dataObject is allocated.

borderType: This string defines how the filter handles pixels at the border of the matrix. Allowed values are CONSTANT [default], REPLICATE, REFLECT, WRAP and REFLECT_101. In case of a constant border, only pixels inside of the element mask are considered (morphologyDefaultBorderValue). Warning: NaN handling for floats is not verified.

Parameters:
  • sourceImage (itom.dataObject) – All types except complex64 and complex128 are accepted

  • destinationImage (itom.dataObject) – Empty object handle. Image will be of src-type

  • kernelSizeX (int, optional) –

    Kernelsize for x-axis

    Value range: [1, 255], Default: 3

  • kernelSizeY (int, optional) –

    Kernelsize for y-axis

    Value range: [1, 255], Default: 3

  • anchor (itom.dataObject, optional) – Position of the kernel anchor, see openCV-Help

  • borderType (str, optional) – border mode used to extrapolate pixels outside of the image

itom.algorithms.cvCalibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs[, flags, maxCounts, epsilonAccuracy])#

Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

The function estimates the intrinsic camera parameters and extrinsic parameters for each of the views. The coordinates of 3D object points and their corresponding 2D projections in each view must be specified. That may be achieved by using an object with a known geometry and easily detectable feature points. Such an object is called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as a calibration rig (see cvFindChessboardCorners()). Currently, initialization of intrinsic parameters (when CV_CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also be used as long as initial cameraMatrix is provided.

The algorithm performs the following steps:

  1. Compute the initial intrinsic parameters (the option only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CV_CALIB_FIX_K? are specified.

  2. Estimate the initial camera pose as if the intrinsic parameters have been already known. This is done using solvePnP() .

  3. Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See projectPoints() for details.

If the reprojectionError is NaN, one or both of the matrices objectPoints and imagePoints probably still contain NaN values after truncation. Remember that this algorithm truncates objectPoints and imagePoints before use, such that for each view the last rows are cut where the value in the first column of either objectPoints or imagePoints is non-finite.

Parameters:
  • objectPoints (itom.dataObject) – [NrOfViews x NrOfPoints x 3] float32 matrix with the coordinates of all points in object space (coordinate system of the calibration pattern). Non-finite rows at the end of each matrix plane will be truncated.

  • imagePoints (itom.dataObject) – [NrOfViews x NrOfPoints x 2] float32 matrix with the pixel coordinates (u,v) of the corresponding plane in each view. Non-finite rows at the end of each matrix-plane will be truncated.

  • imageSize (Sequence[int]) – [height,width] of the camera image (in pixels)

  • cameraMatrix (itom.dataObject) – Output 3x3 float64 camera matrix. If the flags CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, this matrix must be initialized with the right values and is unchanged

  • distCoeffs (itom.dataObject) – Output 1x4, 1x5 or 1x8 distortion values (float64). (k1, k2, p1, p2 [,k3 [,k4 ,k5 ,k6]])

  • rvecs (itom.dataObject) – 3 x NrOfViews float64 output vector, where each column is the rotation vector estimated for each pattern view (Rodrigues coordinates)

  • tvecs (itom.dataObject) – 3 x NrOfViews float64 output vector, where each column is the translation vector estimated for each pattern view

  • flags (int, optional) –

    Different flags that may be a combination of the following values: CV_CALIB_USE_INTRINSIC_GUESS (1), CV_CALIB_FIX_PRINCIPAL_POINT (4), CV_CALIB_FIX_ASPECT_RATIO (2), CV_CALIB_ZERO_TANGENT_DIST (8), CV_CALIB_FIX_K1 (32), CV_CALIB_FIX_K2 (64), CV_CALIB_FIX_K3 (128), CV_CALIB_FIX_K4 (2048), CV_CALIB_FIX_K5 (4096), CV_CALIB_FIX_K6 (8192), CV_CALIB_RATIONAL_MODEL (16384)

    Value range: [0, 30959], Default: 0

  • maxCounts (int, optional) –

    if > 0, maximum number of counts, 0: unlimited number of counts allowed [default: 30]

    Value range: [0, inf], Default: 30

  • epsilonAccuracy (float, optional) –

    if > 0.0, desired accuracy at which the iterative algorithm stops, 0.0: no epsilon criteria [default: DBL_EPSILON]

    Value range: [0, inf], Default: 2.22045e-16

Returns:

reprojectionError - resulting re-projection error

Return type:

float
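
A sketch of a typical calibration call (assuming objPoints and imgPoints have been filled beforehand, e.g. from cvFindChessboardCorners for several views; the sizes are illustrative):

    from itom import dataObject, algorithms

    nViews, nPoints = 10, 9 * 6
    objPoints = dataObject.zeros([nViews, nPoints, 3], 'float32')  # fill with pattern coordinates (z = 0)
    imgPoints = dataObject.zeros([nViews, nPoints, 2], 'float32')  # fill with detected corner pixels

    camMat, distCoeffs = dataObject(), dataObject()
    rvecs, tvecs = dataObject(), dataObject()

    err = algorithms.cvCalibrateCamera(objPoints, imgPoints, [1024, 1280],
                                       camMat, distCoeffs, rvecs, tvecs)
    print('re-projection error:', err)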

itom.algorithms.cvCannyEdge(sourceImage, destinationImage[, lowThreshold, highThresholdRatio, kernelSize])#

Canny edge detector.

This filter wraps Canny's edge detector from OpenCV.

Parameters:
  • sourceImage (itom.dataObject) – Input Object handle, must be a single plane

  • destinationImage (itom.dataObject) – Output object handle that will receive the edge image

  • lowThreshold (float, optional) –

    Low Threshold

    Value range: [-1e+10, 1e+10], Default: 2

  • highThresholdRatio (float, optional) –

    Ratio between High Threshold and Low Threshold, Canny’s recommendation is three

    Value range: [0, 1e+10], Default: 3

  • kernelSize (int, optional) –

    Kernel size for Sobel filter, default is 3

    Value range: [3, 300], Default: 3
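
A minimal usage sketch (the random object only stands in for a real 8-bit image):

    from itom import dataObject, algorithms

    img = dataObject.randN([200, 200], 'uint8')
    edges = dataObject()
    algorithms.cvCannyEdge(img, edges, lowThreshold=40, highThresholdRatio=3)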

itom.algorithms.cvComputeCorrespondEpilines(points, whichImage, F, lines)#

For points in an image of a stereo pair, computes the corresponding epilines in the other image.

For every point in one of the two images of a stereo pair, the function finds the equation of the corresponding epipolar line in the other image.

From the fundamental matrix definition (see findFundamentalMat()), line l^{(2)}_i in the second image for the point p^{(1)}_i in the first image (when whichImage=1) is computed as:

l^{(2)}_i = F p^{(1)}_i

And vice versa, when whichImage=2, l^{(1)}_i is computed from p^{(2)}_i as:

l^{(1)}_i = F^T p^{(2)}_i

Line coefficients are defined up to a scale. They are normalized so that

a_i^2+b_i^2=1.

Parameters:
  • points (itom.dataObject) – coordinates of the image points in one of the two images, a matrix of type [Nx2], float32

  • whichImage (int) –

    Index of the image (1 or 2) that contains the points.

    Value range: [1, 2], Default: 1

  • F (itom.dataObject) – Fundamental matrix that can be estimated using cvFindFundamentalMat() or cvStereoRectify()

  • lines (itom.dataObject) – Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c)

itom.algorithms.cvCornerSubPix(image, corners, winSize[, zeroZone, maxCount, epsilon])#

Refines the corner locations e.g. from cvFindChessboardCorners.

This filter is a wrapper for the OpenCV method cv::cornerSubPix. Check the OpenCV documentation for more details.

Parameters:
  • image (itom.dataObject) – 8bit grayscale input image

  • corners (itom.dataObject) – initial coordinates of the input corners and refined coordinates provided for output

  • winSize (Sequence[int]) – Half of the side length of the search window. Example: (5,5) leads to a (11x11) search window

  • zeroZone (Sequence[int], optional) – Half of the size of the dead region in the middle of the search zone over which the summation is not done. (-1,-1) indicates that there is no such size

  • maxCount (int, optional) –

    position refinement stops after this maximum number of iterations

    Value range: [1, 100000], Default: 200

  • epsilon (float, optional) –

    position refinement stops when the corner position moves by less than this value

    Value range: [0, 10], Default: 0.05

itom.algorithms.cvCvtColor(sourceImage, destinationImage[, code, dstChan])#

Converts an image from one color space to another. In case of linear transformations, the value range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get correct results, for example for an RGB -> L*u*v* transformation. If you have, for instance, a 32-bit floating-point image directly converted from an 8-bit image without any scaling, it will have the 0..255 value range instead of the 0..1 range assumed by the function. So, before calling cvCvtColor, you first need to scale the image down accordingly.

The parameter code defines the conversion:

  • RGB <-> GRAY ( CV_BGR2GRAY = 6, CV_RGB2GRAY = 7 , CV_GRAY2BGR = 8, CV_GRAY2RGB = 8)

  • RGB <-> CIE XYZ.Rec 709 with D65 white point ( CV_BGR2XYZ = 32, CV_RGB2XYZ = 33, CV_XYZ2BGR = 34, CV_XYZ2RGB = 35)

  • RGB <-> YCrCb JPEG (or YCC) ( CV_BGR2YCrCb = 36, CV_RGB2YCrCb = 37, CV_YCrCb2BGR = 38, CV_YCrCb2RGB = 39)

  • RGB <-> HSV ( CV_BGR2HSV = 40, CV_RGB2HSV = 41, CV_HSV2BGR = 54, CV_HSV2RGB = 55 )

  • RGB <-> HLS ( CV_BGR2HLS = 52, CV_RGB2HLS = 53, CV_HLS2BGR = 60, CV_HLS2RGB = 61)

  • RGB <-> CIE L*a*b* ( CV_BGR2Lab = 44, CV_RGB2Lab = 45, CV_Lab2BGR = 56, CV_Lab2RGB = 57)

  • RGB <-> CIE L*u*v* ( CV_BGR2Luv = 50, CV_RGB2Luv = 51, CV_Luv2BGR = 58, CV_Luv2RGB = 59)

  • Bayer <-> RGB ( CV_BayerBG2BGR = 46, CV_BayerGB2BGR = 47, CV_BayerRG2BGR = 48, CV_BayerGR2BGR = 49, …

    CV_BayerBG2RGB = 48, CV_BayerGB2RGB = 49, CV_BayerRG2RGB = 46, CV_BayerGR2RGB = 47)

For more details see OpenCV documentation.

Parameters:
  • sourceImage (itom.dataObject) – Input Object handle, must be a single plane

  • destinationImage (itom.dataObject) – Output object handle for the converted image

  • code (int, optional) –

    Transformation code, see (OpenCV) documentation

    Value range: [0, 65535], Default: 0

  • dstChan (int, optional) –

    number of color channels of destination image, for 0 the number of channels is derived from the transformation (default)

    Value range: [0, 5], Default: 0
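
A hedged sketch for a color-to-gray conversion (assuming the plugin maps an rgba32 dataObject to a multi-channel cv::Mat; the zero image only stands in for real data):

    from itom import dataObject, algorithms

    color = dataObject.zeros([480, 640], 'rgba32')
    gray = dataObject()
    algorithms.cvCvtColor(color, gray, code=6)  # CV_BGR2GRAY = 6, see the list above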

itom.algorithms.cvDilate(sourceObj, destinationObj[, element, anchor, iterations, borderType])#

Dilates every plane of a data object by using a specific structuring element.

This filter applies the dilation method cv::dilate of OpenCV to every plane in the source data object. The result is contained in the destination object. It can handle data objects of type uint8, uint16, int16, float32 and float64 only.

The filter may work in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

The dilation is executed using a structuring element which is, if not stated otherwise, a 3x3 kernel filled with ones. Alternatively, you can pass a two-dimensional uint8 data object. The function then dilates the source image using this structuring element, which determines the shape of the pixel neighborhood over which the maximum is taken:

dst(x,y) = max_{(x’,y’):element(x’,y’)!=0} src(x+x’,y+y’)

Dilation can be applied several times (parameter ‘iterations’).

Parameters:
  • sourceObj (itom.dataObject) – input data object of type uint8, uint16, int16, float32, float64

  • destinationObj (itom.dataObject) – output image with the same type and size than input (inplace allowed)

  • element (itom.dataObject, optional) – structuring element used for the morphological operation (default: None, a 3x3 rectangular structuring element is used). Otherwise: a uint8 data object where values > 0 are considered for the operation.

  • anchor (Sequence[int], optional) – position of the anchor within the element. If not given or if (-1,-1), the anchor is at the element center [default].

  • iterations (int, optional) –

    number of times the morphological operation is applied [default: 1]

    Value range: [1, 65000], Default: 1

  • borderType (str, optional) – This string defines how the filter handles pixels at the border of the matrix. Allowed values are CONSTANT [default], REPLICATE, REFLECT, WRAP and REFLECT_101. In case of a constant border, only pixels inside of the element mask are considered (morphologyDefaultBorderValue)
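
A sketch with a custom structuring element (a 5x5 cross; values > 0 are considered, as stated above; slice assignment on dataObjects is assumed to work as in numpy):

    from itom import dataObject, algorithms

    src = dataObject.randN([128, 128], 'uint8')
    dst = dataObject()

    element = dataObject.zeros([5, 5], 'uint8')
    element[2, :] = 1   # horizontal bar of the cross
    element[:, 2] = 1   # vertical bar of the cross

    algorithms.cvDilate(src, dst, element=element, iterations=2)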

itom.algorithms.cvDrawChessboardCorners(image, patternSize, corners, patternWasFound)#

Renders the detected chessboard corners.

The function draws individual chessboard corners detected either as red circles if the board was not found, or as colored corners connected with lines if the board was found.

Parameters:
  • image (itom.dataObject) – rgba32 input and destination image (must be of type ito::rgba32).

  • patternSize (Sequence[int]) – Number of inner corners per chessboard row and column (points_per_row, points_per_column)

  • corners (itom.dataObject) – array of detected corners (n x 2), the output of cvFindChessboardCorners or cvCornerSubPix

  • patternWasFound (int) –

    Parameter indicating whether the complete board was found or not.

    Value range: [0, 1], Default: 1

itom.algorithms.cvDrawKeypoints(image, keypoints, outImage[, color, flags])#

Draws keypoints.

Parameters:
  • image (itom.dataObject) – Source image (uint8 or rgba32).

  • keypoints (itom.dataObject) – keypoints of the source image (n x 7) float32 data object

  • outImage (itom.dataObject) – Output image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.

  • color (int, optional) –

    color of keypoints (pass a rgba32 value). If 0 or omitted, random colors will be used.

    All values allowed, Default: 0

  • flags (int, optional) –

    flags for drawing features (bit-combination):

    • 0: DEFAULT: The output image matrix will be created (Mat::create), i.e. existing memory of the output image may be reused. Two source images, matches, and single keypoints will be drawn. For each keypoint, only the center point will be drawn (without a circle around the keypoint with the keypoint size and orientation).

    • 1: DRAW_OVER_OUTIMG: The output image matrix will not be created (using Mat::create). Matches will be drawn on the existing content of the output image.

    • 4: DRAW_RICH_KEYPOINTS: For each keypoint, the circle around the keypoint with the keypoint size and orientation will be drawn.

    Value range: [0, 5], Default: 0

itom.algorithms.cvDrawMatcher(first_image, second_image, first_keypoints, second_keypoints, matches, out_img[, match_color, single_point_color, flags, max_match_distance])#

Draws the obtained matching points between two images. This function draws matches of keypoints from two images into the output image. A match is a line connecting two keypoints (circles).

Parameters:
  • first_image (itom.dataObject) – Input parameter - first image to draw the matching points

  • second_image (itom.dataObject) – Input parameter - second image to draw the matching points

  • first_keypoints (itom.dataObject) – keypoints of the first image (n x 7) float32 data object

  • second_keypoints (itom.dataObject) – keypoints of the second image (n x 7) float32 data object

  • matches (itom.dataObject) – Input parameter - Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]]

  • out_img (itom.dataObject) – Output parameter - Output image

  • match_color (int, optional) –

    color of matches (pass a rgba32 value). If 0 or omitted, random colors will be used.

    All values allowed, Default: 0

  • single_point_color (int, optional) –

    color of single keypoints (pass a rgba32 value). If 0 or omitted, random colors will be used.

    All values allowed, Default: 0

  • flags (int, optional) –

    flags for drawing features (bit-combination):

    • 0: DEFAULT: The output image matrix will be created (Mat::create), i.e. existing memory of the output image may be reused. Two source images, matches, and single keypoints will be drawn. For each keypoint, only the center point will be drawn (without a circle around the keypoint with the keypoint size and orientation).

    • 1: DRAW_OVER_OUTIMG: The output image matrix will not be created (using Mat::create). Matches will be drawn on the existing content of the output image.

    • 2: NOT_DRAW_SINGLE_POINTS: Single keypoints will not be drawn.

    • 4: DRAW_RICH_KEYPOINTS: For each keypoint, the circle around the keypoint with the keypoint size and orientation will be drawn.

    Value range: [0, 7], Default: 0

  • max_match_distance (float, optional) –

    max match distance that should be drawn. If 0, every match is drawn [default]

    Value range: [0, inf], Default: 0

itom.algorithms.cvErode(sourceObj, destinationObj[, element, anchor, iterations, borderType])#

Erodes every plane of a data object by using a specific structuring element.

This filter applies the erosion method cv::erode of OpenCV to every plane in the source data object. The result is contained in the destination object. It can handle data objects of type uint8, uint16, int16, float32 and float64 only.

The filter may work in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

The erosion is executed using a structuring element which is, if not stated otherwise, a 3x3 kernel filled with ones. Alternatively, you can pass a two-dimensional uint8 data object. The function then erodes the source image using this structuring element, which determines the shape of the pixel neighborhood over which the minimum is taken:

dst(x,y) = min_{(x’,y’):element(x’,y’)!=0} src(x+x’,y+y’)

Erosion can be applied several times (parameter ‘iterations’).

Parameters:
  • sourceObj (itom.dataObject) – input data object of type uint8, uint16, int16, float32, float64

  • destinationObj (itom.dataObject) – output image with the same type and size than input (inplace allowed)

  • element (itom.dataObject, optional) – structuring element used for the morphological operation (default: None, a 3x3 rectangular structuring element is used). Otherwise: a uint8 data object where values > 0 are considered for the operation.

  • anchor (Sequence[int], optional) – position of the anchor within the element. If not given or if (-1,-1), the anchor is at the element center [default].

  • iterations (int, optional) –

    number of times the morphological operation is applied [default: 1]

    Value range: [1, 65000], Default: 1

  • borderType (str, optional) – This string defines how the filter handles pixels at the border of the matrix. Allowed values are CONSTANT [default], REPLICATE, REFLECT, WRAP and REFLECT_101. In case of a constant border, only pixels inside of the element mask are considered (morphologyDefaultBorderValue)

itom.algorithms.cvEstimateAffine3DParams(sources, destinations, output[, inliers, ransacThreshold, confidence])#

Computes an optimal affine transformation between two 3D point sets

The function estimates an optimal 3D affine transformation between two 3D point sets using the RANSAC algorithm. The transformation then satisfies [destination; 1] = output * [source; 1] for each corresponding point in the sources and destinations point sets.

Parameters:
  • sources (itom.dataObject) – [n x 3] array of source points (will be converted to float64).

  • destinations (itom.dataObject) – [n x 3] array of destination points (must have the same size as sources, will be converted to float64).

  • output (itom.dataObject) – Output 3D affine transformation matrix 3x4 (float64)

  • inliers (itom.dataObject, optional) – Output vector indicating which points are inliers (uint8)

  • ransacThreshold (float, optional) –

    Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier (default: 3.0)

    Value range: [0, inf], Default: 3

  • confidence (float, optional) –

    Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

    Value range: [0, 1], Default: 0.99

Returns:

ret - return value

Return type:

int

itom.algorithms.cvFFT1D(sourceImage)#

One-dimensional Fourier transformation using cv::DFT.

This filter tries to perform an in-place FFT for a given line or a 2D dataObject. The FFT is calculated line-wise. The result is a complex dataObject. The axis scales and units are inverted and adjusted accordingly.

This filter internally calls the function ito::dObjHelper::calcCVDFT(dObjImages, false, false, true).

Parameters:

sourceImage (itom.dataObject) – Input Object handle, must be a single plane

itom.algorithms.cvFFT2D(sourceImage)#

Two-dimensional Fourier transformation using cv::DFT.

This filter tries to perform an in-place FFT for a given 2D dataObject. The FFT is calculated plane-wise. The result is a complex dataObject. The axis scales and units are inverted and adjusted accordingly.

This filter internally calls the function ito::dObjHelper::calcCVDFT(dObjImages, false, false, false).

Parameters:

sourceImage (itom.dataObject) – Input Object handle, must be a single plane
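
A round-trip sketch (assuming the in-place transformation replaces the real input by its complex spectrum, as described above; cvIFFT2D is documented below):

    from itom import dataObject, algorithms

    img = dataObject.randN([256, 256], 'float32')
    algorithms.cvFFT2D(img)    # img now holds the complex spectrum
    algorithms.cvIFFT2D(img)   # back to a real dataObject (up to rounding)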

itom.algorithms.cvFindChessboardCorners(dataObject, patternSize, corners[, flags])#

Finds the positions of internal corners of the chessboard.

This filter is a wrapper for the OpenCV method cv::findChessboardCorners. The OpenCV function attempts to determine whether the input image is a view of the chessboard pattern and to locate the internal chessboard corners. The function returns a non-zero value if all of the corners are found and placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or to reorder them, it returns 0. For example, a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black squares touch each other. The detected coordinates are approximate; to determine their positions more accurately, the function calls cornerSubPix().

Remark 1: This function gives only a rough estimation of the positions. For higher resolution, you should use the function cornerSubPix() with different parameters if the returned coordinates are not accurate enough. That function is wrapped in itom by the filter 'cvCornerSubPix'.

Remark 2: The outer frame of the dataObject / the image should not be white but should have approximately the same gray value as the bright fields.

Remark 3: The bright fields should be free of darker dirt or dust and you should apply a coarse shading correction to improve the results.

Parameters:
  • dataObject (itom.dataObject) – 8bit grayscale input image

  • patternSize (Sequence[int]) – Number of inner corners per chessboard row and column (points_per_row, points_per_column)

  • corners (itom.dataObject) – output: float32-dataObject, [n x 2] with the coordinates of n detected corner points

  • flags (int, optional) –

    OR Combination of various flags:

    • CV_CALIB_CB_ADAPTIVE_THRESH (1) - Use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness) [default],

    • CV_CALIB_CB_NORMALIZE_IMAGE (2) - Normalize the image gamma with equalizeHist() before applying fixed or adaptive thresholding [default],

    • CV_CALIB_CB_FILTER_QUADS (4) - Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads extracted at the contour retrieval stage,

    • CALIB_CB_FAST_CHECK (8) - Run a fast check on the image that looks for chessboard corners, and shortcut the call if none is found. This can drastically speed up the call in the degenerate condition when no chessboard is observed (recommended to pre-check image).

    Value range: [0, 15], Default: 3

Returns:

result - 0: detection failed, 1: detection has been successful

Return type:

int
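
A typical detection pipeline, combining this filter with cvCornerSubPix (img must be a real 8-bit chessboard image loaded beforehand; the empty dataObject here is only a placeholder):

    from itom import dataObject, algorithms

    img = dataObject()       # 8-bit grayscale camera image, e.g. loaded beforehand
    corners = dataObject()
    if algorithms.cvFindChessboardCorners(img, [9, 6], corners):
        # refine to sub-pixel accuracy within an 11x11 search window
        algorithms.cvCornerSubPix(img, corners, [5, 5])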

itom.algorithms.cvFindCircles(image, circles[, dp, min_dist, canny_threshold, acc_threshold, min_radius, max_radius])#

Finds circles in a grayscale image using the Hough transform.

This filter is a wrapper for the OpenCV function cv::HoughCircles. The function finds circles in a grayscale image using a modification of the Hough transform. Based on this, circles are identified and located. The result is a dataObject where the number of rows corresponds to the number of found circles; each row is (x, y, r).

Parameters:
  • image (itom.dataObject) – input image of type uint8

  • circles (itom.dataObject) – output object; each row contains (x, y, r) of one detected circle

  • dp (float, optional) –

    Inverse ratio of the accumulator resolution to the image resolution.

    Value range: [1, 100], Default: 1

  • min_dist (float, optional) –

    Minimum center distance of the circles.

    Value range: [1, 100000], Default: 20

  • canny_threshold (float, optional) –

    The higher threshold of the two passed to the Canny() edge detector (the lower one is half of it).

    Value range: [1, 255], Default: 200

  • acc_threshold (float, optional) –

    The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.

    Value range: [1, 255], Default: 100

  • min_radius (int, optional) –

    Min Radius in x/y

    Value range: [0, inf], Default: 0

  • max_radius (int, optional) –

    Max Radius in x/y (if 0: the maximum of the image width or height is taken)

    Value range: [0, inf], Default: 0
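
A minimal usage sketch (the zero image only stands in for a real grayscale image containing circular structures):

    from itom import dataObject, algorithms

    img = dataObject.zeros([300, 300], 'uint8')
    circles = dataObject()
    algorithms.cvFindCircles(img, circles, dp=1, min_dist=30,
                             min_radius=5, max_radius=60)
    # each row of 'circles' describes one detection: (x, y, r)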

itom.algorithms.cvFindFundamentalMat(points1, points2, F[, method, param1, param2, status])#

Calculates a fundamental matrix from the corresponding points in two images.

The epipolar geometry is described by the following equation:

[p_2; 1]^T F [p_1; 1] = 0

where F is a fundamental matrix, p_1 and p_2 are corresponding points in the first and the second images, respectively.

The function calculates the fundamental matrix using one of the four methods listed below (see the method parameter) and returns the found fundamental matrix. Normally just one matrix is found. But in case of the 7-point algorithm, the function may return up to 3 solutions (a 9 x 3 matrix that stores all 3 matrices sequentially).

Parameters:
  • points1 (itom.dataObject) – coordinates of the points in the first image, a matrix of type [Nx2], float32 or float64

  • points2 (itom.dataObject) – coordinates of the points in the second image, a matrix of type [Nx2], float32 or float64

  • F (itom.dataObject) – output, fundamental matrix [3x3], float64

  • method (int, optional) –

    Method for computing a fundamental matrix. The following values are possible: CV_FM_7POINT (1), CV_FM_8POINT (2) [default], CV_FM_LMEDS (4), CV_FM_RANSAC (8)

    Value range: [1, 8], Default: 2

  • param1 (float, optional) –

    Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

    Value range: [0, inf], Default: 3

  • param2 (float, optional) –

    Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

    Value range: [0, 1], Default: 0.99

  • status (itom.dataObject, optional) – Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s. If not given, no status information is returned.

itom.algorithms.cvFindHomography(srcPoints, dstPoints, homography[, interpolation, ransacReprojThreshold])#

Finds a perspective transformation between two planes.

The function finds and returns the perspective transformation H between the source and the destination planes:

s_i \begin{bmatrix}{x'_i}\\{y'_i}\\{1}\end{bmatrix} \sim H \begin{bmatrix}{x_i}\\{y_i}\\{1}\end{bmatrix}

so that the back-projection error

\sum _i \left(x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left(y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2

is minimized.

The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is determined up to a scale; thus, it is normalized so that h_{33}=1.

Parameters:
  • srcPoints (itom.dataObject) – coordinates of the points in the original plane, a matrix of type [Nx2], float32

  • dstPoints (itom.dataObject) – coordinates of the points in the target plane, a matrix of type [Nx2], float32

  • homography (itom.dataObject) – 3x3 homography matrix (output)

  • interpolation (int, optional) –

    Method. The following values are possible: regular method using all points (0) [default], CV_RANSAC (8), CV_LMEDS (4)

    Value range: [0, 4], Default: 0

  • ransacReprojThreshold (float, optional) –

    maximum allowed reprojection error to treat a point pair as an inlier (used for RANSAC only)

    Value range: [0, inf], Default: 3

itom.algorithms.cvFlannBasedMatcher(first_descriptors, second_descriptors, Matching_descriptor[, max_distance, first_keypoints, second_keypoints, first_best_matches_points, second_best_matches_points, good_matches])#

This function uses nearest-neighbor search methods to find the best matching points, by means of the FLANN matcher. This includes several nearest-neighbor algorithms to calculate the distance between two points.

If desired, this function can also return a filtered list of matches and keypoints (keypoints1 and keypoints2) that only contain matches and keypoints whose matched distances are bounded by max_distance. You only need to indicate parameters belonging to the best-matching process if this max_distance parameter is > 0.0.

Parameters:
  • first_descriptors (itom.dataObject) – Input parameter - (n x 128) float32 data object of descriptors from first image (queryDescriptors). These descriptors can be computed from sift/surf algorithms.

  • second_descriptors (itom.dataObject) – Input parameter - (n x 128) float32 data object of descriptors from second image (trainDescriptors). These descriptors can be computed from sift/surf algorithms.

  • Matching_descriptor (itom.dataObject) – Output parameter - (n x 4) float32 data object of Matching descriptor vectors using FLANN matcher. Every row contains the values (queryIdx,trainIdx,imgIdx,distance)

  • max_distance (float, optional) –

    Maximum distance between two pair of points to calculate the best matching.

    Value range: [0, inf], Default: 0

  • first_keypoints (itom.dataObject, optional) – Optional input parameter - corresponding keypoints of the first image (n x 7) float32 data object, must have the same number of rows as first_descriptors.

  • second_keypoints (itom.dataObject, optional) – Optional input parameter - corresponding keypoints of the second image (n x 7) float32 data object, must have the same number of rows as second_descriptors.

  • first_best_matches_points (itom.dataObject, optional) – Optional output parameter - (m x 2) float32 data object of best matching points from first image. each row includes (x and y coordinates), and m is the number of best matching points

  • second_best_matches_points (itom.dataObject, optional) – Optional output parameter - (m x 2) float32 data object of best matching points from second image. each row includes (x and y coordinates), and m is the number of best matching points

  • good_matches (itom.dataObject, optional) – Optional output parameter - (m x 4) float32 data object of good matching descriptor vectors using FLANN matcher. Every row contains the values (queryIdx,trainIdx,imgIdx,distance)

itom.algorithms.cvFlipLeftRight(scrImage, destImage)#

This filter flips the image left to right.

This filter applies the flip method cv::flip of OpenCV with flipCode > 0 to a 2D source data object. The result is contained in the destination object.

The filter may work in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

Parameters:
  • scrImage (itom.dataObject) – Input image

  • destImage (itom.dataObject) – Output image

itom.algorithms.cvFlipUpDown(scrImage, destImage)#

This filter flips the image upside down.

This filter applies the flip method cv::flip of OpenCV with flipCode = 0 to a 2D source data object. The result is contained in the destination object.

The filter may work in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

Parameters:
  • scrImage (itom.dataObject) – Input image

  • destImage (itom.dataObject) – Output image

itom.algorithms.cvGetRotationMatrix2D(center, angle, scale, rotationMatrix)#

Calculates an affine matrix of 2D rotation. The function calculates the following matrix:

[  alpha   beta    (1 - alpha) * center.x - beta * center.y ]
[ -beta    alpha   beta * center.x + (1 - alpha) * center.y ]

where alpha = scale * cos(angle) and beta = scale * sin(angle). The transformation maps the rotation center to itself; if this is not the desired behavior, adjust the shift. The rotation can then be applied by using e.g. the cvWarpAffine filter.

Note: When you want to use the cvWarpAffine method with this rotation matrix, your center coordinates must be in the pixel domain.

Parameters:
  • center (Sequence[float]) – center coordinates (x, y) of the rotation, in physical values of the source image.

  • angle (float) – Rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).

  • scale (float) – Isotropic scale factor.

  • rotationMatrix (itom.dataObject) – rotation matrix
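
A sketch combining this filter with cvWarpAffine (pixel-domain center, as noted above; the random object only stands in for a real image):

    from itom import dataObject, algorithms

    src = dataObject.randN([200, 300], 'float32')

    rotMat = dataObject()
    algorithms.cvGetRotationMatrix2D([150, 100], 30.0, 1.0, rotMat)  # center (x, y) in pixels

    dst = dataObject()
    algorithms.cvWarpAffine(src, dst, rotMat, [300, 200])  # destination size (width, height)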

itom.algorithms.cvIFFT1D(sourceImage)#

One-dimensional inverse Fourier transformation using cv::DFT.

This filter tries to perform an in-place inverse FFT for a given line or a 2D dataObject. The transformation is calculated line-wise. The result is a real dataObject. The axis scales and units are inverted and adjusted accordingly.

This filter internally calls the function ito::dObjHelper::calcCVDFT(dObjImages, true, true, true).

Parameters:

sourceImage (itom.dataObject) – Input Object handle, must be a single plane

itom.algorithms.cvIFFT2D(sourceImage)#

Two-dimensional inverse Fourier transformation using cv::DFT.

This filter tries to perform an in-place inverse FFT for a given 2D dataObject. The transformation is calculated plane-wise. The result is a real dataObject. The axis scales and units are inverted and adjusted accordingly.

This filter internally calls the function ito::dObjHelper::calcCVDFT(dObjImages, true, true, false).

Parameters:

sourceImage (itom.dataObject) – Input Object handle, must be a single plane

itom.algorithms.cvInitUndistortRectifyMap(cameraMatrix, distCoeffs, size, map1, map2[, R, newCameraMatrix])#

Computes the undistortion and rectification transformation map.

Parameters:
  • cameraMatrix (itom.dataObject) – Input camera matrix A = [[fx 0 cx];[0 fy cy];[0 0 1]]

  • distCoeffs (itom.dataObject) – Input vector of distortion coefficients [1 x 4,5,8] (k1, k2, p1, p2 [, k3[, k4, k5, k6]]) of 4, 5 or 8 elements.

  • size (Sequence[int]) – undistorted image size

  • map1 (itom.dataObject) – The first output map, type is float32.

  • map2 (itom.dataObject) – The second output map, type is float32.

  • R (itom.dataObject, optional) – Rectification transformation in the object space (3x3 matrix). If not given, the identity transformation is used.

  • newCameraMatrix (itom.dataObject, optional) – New camera matrix A’. If not given, the camera matrix is used.

itom.algorithms.cvMedianBlur(sourceImage, destinationImage[, kernelSize])#

Planewise median blur filter.

The function smoothes an image using the median filter with a kernelSize x kernelSize aperture. Each channel of a multi-channel image is processed independently. It can handle data objects of type uint8, uint16, int16, int32, float32 and float64 only.

The itom-wrapping does not work inplace currently. A new dataObject is allocated.

Warning: NaN-handling for floats not verified.

Parameters:
  • sourceImage (itom.dataObject) – Image of type Integer or float32

  • destinationImage (itom.dataObject) – Empty dataObject handle. The destination will be of the source type

  • kernelSize (int, optional) –

    Kernelsize in x/y

    Value range: [3, 255], Default: 3

itom.algorithms.cvMergeChannels(inputObject, outputObject[, alpha])#

Reduces a [4x…xMxN] or [3x…xMxN] uint8 data object to a […xMxN] rgba32 data object where the first dimension is merged into the color type. If the first dimension is equal to 4, the planes are used for the blue, green, red and alpha components; in case of three planes, the alpha component is set to the optional alpha value.

Parameters:
  • inputObject (itom.dataObject) – uint8 data object with any shape and at least three dimensions

  • outputObject (itom.dataObject) – rgba32 data object

  • alpha (int, optional) –

    if the first dimension of the inputObject is 3, this alpha value is used for all alpha components in the output object

    Value range: [0, 255], Default: 255

itom.algorithms.cvMorphologyEx(sourceObj, destinationObj, operation[, element, anchor, iterations, borderType])#

Applies an advanced morphological transformation to every plane of a data object, using a specific structuring element.

The function cv::morphologyEx performs advanced morphological transformations, using erosion and dilation as basic operations. Any of the operations can be done in place. In case of multi-channel images, each channel is processed independently.

Parameters:
  • sourceObj (itom.dataObject) – input data object of type uint8, uint16, int16, float32, float64

  • destinationObj (itom.dataObject) – output image with the same type and size than input (inplace allowed)

  • operation (int) –

    This parameters defines the operation type, 0: Erode, 1: Dilate, 2: Open, 3: Close, 4: Gradient, 5: Tophat, 6: Blackhat, 7: Hit or miss

    Value range: [0, 7], Default: 0

  • element (itom.dataObject, optional) – structuring element used for the morphological operation (default: None, a 3x3 rectangular structuring element is used). Otherwise: a uint8 data object where values > 0 are considered for the operation.

  • anchor (Sequence[int], optional) – position of the anchor within the element. If not given or if (-1,-1), the anchor is at the element center [default].

  • iterations (int, optional) –

    number of times the morphological operation is applied [default: 1]

    Value range: [1, 65000], Default: 1

  • borderType (str, optional) – This string defines how the filter handles pixels at the border of the matrix. Allowed values are CONSTANT [default], REPLICATE, REFLECT, WRAP and REFLECT_101. In case of a constant border, only pixels inside of the element mask are considered (morphologyDefaultBorderValue)

itom.algorithms.cvProjectPoints(inputObject, outputObject, M, distCoeff, RVec, TVec)#

Projects points from object space into image space using the given calibration matrix, distortion coefficients, rotation vector and translation vector.

Parameters:
  • inputObject (itom.dataObject) – input image

  • outputObject (itom.dataObject) – output image that has the size dsize and the same type as input image

  • M (itom.dataObject) – 3x3 camera matrix

  • distCoeff (itom.dataObject) – matrix with distortion coefficients

  • RVec (itom.dataObject) – rotation vector

  • TVec (itom.dataObject) – translation vector

itom.algorithms.cvRemap(source, destination, map1, map2[, interpolation, borderMode, borderValue])#

Applies a generic geometrical transformation to an image.

The function remap transforms the source image using the specified map:

dst(x,y) = src(map1(x, y), map2(x, y))

where values of pixels with non-integer coordinates are computed using one of available interpolation methods. map_x and map_y can be encoded as separate floating-point maps in map_1 and map_2 respectively, or interleaved floating-point maps of (x,y) in map_1 , or fixed-point maps created by using convertMaps() . The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (~2x) remapping operations. In the converted case, map_1 contains pairs (cvFloor(x), cvFloor(y)) and map_2 contains indices in a table of interpolation coefficients.

Parameters:
  • source (itom.dataObject) – source image

  • destination (itom.dataObject) – destination image. It has the same size as map1 and the same type as source.

  • map1 (itom.dataObject) – The first map of x values

  • map2 (itom.dataObject) – The second map of y values

  • interpolation (int, optional) –

    Interpolation method. The following values are possible: INTER_NEAREST (0), INTER_LINEAR (1), INTER_CUBIC (2), INTER_LANCZOS4 (4)

    Value range: [0, 4], Default: 1

  • borderMode (int, optional) –

    Pixel extrapolation method. When borderMode == BORDER_TRANSPARENT (5), the pixels in the destination image that correspond to outliers in the source image are not modified by the function. The following values are possible: BORDER_CONSTANT (0), BORDER_REPLICATE (1), BORDER_REFLECT (2), BORDER_WRAP (3), BORDER_REFLECT101 (4), BORDER_TRANSPARENT (5), BORDER_DEFAULT (4), BORDER_ISOLATED (16)

    Value range: [0, 16], Default: 0

  • borderValue (float, optional) – value used in case of a constant border. By default, it is 0

itom.algorithms.cvRemoveSpikes(sourceObject, destinationObject[, kernelSize, lowestValue, highestValue, newValue])#

Set single spikes at measurement edges to a new value.

This filter creates a binary mask for the input object. The value of mask(y,x) will be 1 if the value of input(y,x) is within the specified range and is finite. The mask is eroded and then dilated by the kernel size using the OpenCV functions cv::erode and cv::dilate with a single iteration. In the last step, the value of output(y,x) is set to newValue if mask(y,x) is 0.

The filter may work in place if the same source and destination data object is given; otherwise, the destination data object is checked whether it fits the size and type of the source data object, and if not, a new one is allocated and the input data is copied to the new object.

Parameters:
  • sourceObject (itom.dataObject) – 32 or 64 bit floating point input image

  • destinationObject (itom.dataObject) – 32 or 64 bit floating point output image

  • kernelSize (int, optional) –

    N defines the N x N kernel

    Value range: [3, 13], Default: 5

  • lowestValue (float, optional) –

    Lowest value to consider as valid

    All values allowed, Default: 0

  • highestValue (float, optional) –

    Highest value to consider as valid

    All values allowed, Default: 1

  • newValue (float, optional) –

    Replacement value for spike elements

    All values allowed, Default: nan
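
A minimal usage sketch (the random object only stands in for a real height map with spikes; the value range is illustrative):

    from itom import dataObject, algorithms

    topo = dataObject.randN([512, 512], 'float32')
    clean = dataObject()
    algorithms.cvRemoveSpikes(topo, clean, kernelSize=5,
                              lowestValue=-10.0, highestValue=10.0,
                              newValue=float('nan'))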

itom.algorithms.cvResize(inputObject, outputObject, fx, fy[, interpolation])#

Resizes an image.

The function resizes the image 'inputObject' down or up by the specified factors fx and fy.

To shrink an image, it will generally look best with CV_INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with CV_INTER_CUBIC (slow) or CV_INTER_LINEAR (faster but still looks OK). The axisScale properties of the x- and y-axes of the outputObject are divided by fx and fy respectively, while the offset values are multiplied with fx and fy.

Parameters:
  • inputObject (itom.dataObject) – input image (2D after an optional squeeze operation)

  • outputObject (itom.dataObject) – output image; it will have the same type as inputObject. Its size corresponds to the size of the input object multiplied by fx and fy, respectively.

  • fx (float) –

    scale factor along the horizontal axis.

    Value range: [1e-06, inf], Default: 1

  • fy (float) –

    scale factor along the vertical axis.

    Value range: [1e-06, inf], Default: 1

  • interpolation (int, optional) –

    Interpolation method. The following values are possible:

    INTER_NEAREST (0), INTER_LINEAR (1), INTER_CUBIC (2), INTER_AREA (3), INTER_LANCZOS4 (4)

    Value range: [0, 4], Default: 1
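
A sketch for shrinking an image to half size (CV_INTER_AREA = 3, as recommended above for shrinking; the random object only stands in for real data):

    from itom import dataObject, algorithms

    src = dataObject.randN([480, 640], 'float32')
    half = dataObject()
    algorithms.cvResize(src, half, 0.5, 0.5, interpolation=3)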

itom.algorithms.cvRotate180(scrImage, destImage)#

This filter rotates the image by 180°.

This filter applies the flip method cv::flip of OpenCV horizontally and vertically to rotate the object. The result is contained in the destination object.

The filter may work in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

Parameters:
  • scrImage (itom.dataObject) – Input image

  • destImage (itom.dataObject) – Output image

itom.algorithms.cvRotateM90(scrImage, destImage)#

This filter rotates the image by 90° clockwise.

This filter applies the flip method cv::flip and the transpose method cv::transpose of OpenCV to rotate the object. The result is contained in the destination object.

The filter may work pseudo in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

Parameters:
  • scrImage (itom.dataObject) – Input image

  • destImage (itom.dataObject) – Output image

itom.algorithms.cvRotateP90(scrImage, destImage)#

This filter rotates the image by 90° counter-clockwise.

This filter applies the flip method cv::flip and the transpose method cv::transpose of OpenCV to rotate the object. The result is contained in the destination object.

The filter may work pseudo in place if the same data object is passed as input and destination; otherwise, the output data object is checked whether it fits the size and type of the source data object, and a new one is allocated if it does not.

Parameters:
  • scrImage (itom.dataObject) – Input image

  • destImage (itom.dataObject) – Output image

itom.algorithms.cvSplitChannels(rgbaObject, outputObject)#

Converts a rgba32 data object (with four channels blue, green, red, alpha) into an output data object of type ‘uint8’ and a shape that has one dimension more than the input object and the first dimension is equal to 4. The four color components are then distributed into the 4 planes of the first dimension.

For instance, a 4x5x3 rgba32 data object leads to a 4x4x5x3 uint8 data object.

Parameters:
  • rgbaObject (itom.dataObject) – rgba32 data object with any shape

  • outputObject (itom.dataObject) – uint8 data object with new shape [4,shape] where shape is the original shape. The inserted 4 dimensions represent the color components (b,g,r,alpha) of the source object.
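
A round-trip sketch with cvMergeChannels (see above), splitting a color object into its components and merging it back:

    from itom import dataObject, algorithms

    color = dataObject.zeros([480, 640], 'rgba32')
    planes = dataObject()
    algorithms.cvSplitChannels(color, planes)    # planes: [4 x 480 x 640], uint8

    merged = dataObject()
    algorithms.cvMergeChannels(planes, merged)   # back to [480 x 640], rgba32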

itom.algorithms.cvThreshold(source, destination, threshold, maxValue, type)#

Applies a fixed-level threshold to each array element.

The function applies fixed-level thresholding to a multiple-channel array. It is typically used to get a bi-level (binary) image out of a grayscale image (compare could also be used for this purpose) or for removing noise, that is, filtering out pixels with too small or too large values. There are several types of thresholding supported by the function; they are determined by the type parameter.

Also, the special values THRESH_OTSU or THRESH_TRIANGLE may be combined with one of the above values. In these cases, the function determines the optimal threshold value using Otsu's or the Triangle algorithm and uses it instead of the specified threshold.

Note: Currently, the Otsu’s and Triangle methods are implemented only for 8-bit single-channel images.

Parameters:
  • source (itom.dataObject) – source image

  • destination (itom.dataObject) – destination image. It has the same size and type as source.

  • threshold (float) –

    threshold value.

    Value range: [2.22507e-308, inf], Default: 0

  • maxValue (float) –

    maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

    Value range: [2.22507e-308, inf], Default: 0

  • type (int) –

    threshold type: THRESH_BINARY (0), THRESH_BINARY_INV (1), THRESH_TRUNC (2), THRESH_TOZERO (3), THRESH_TOZERO_INV (4), THRESH_MASK (7), THRESH_OTSU (8), THRESH_TRIANGLE (16)

    Value range: [0, 16], Default: 0
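
A sketch for a binary threshold with Otsu's method (THRESH_BINARY (0) combined with THRESH_OTSU (8); the given threshold of 128 is then ignored, and the random 8-bit object only stands in for a real image):

    from itom import dataObject, algorithms

    img = dataObject.randN([256, 256], 'uint8')
    binary = dataObject()
    algorithms.cvThreshold(img, binary, 128.0, 255.0, 0 | 8)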

itom.algorithms.cvUndistort(source, destination, cameraMatrix, distCoeffs[, newCameraMatrix])#

Transforms an image to compensate for lens distortion.

The function transforms an image to compensate for radial and tangential lens distortion. It is simply a combination of cvInitUndistortRectifyMap() (with unity R) and cvRemap() (with bilinear interpolation). See the former function for details of the transformation being performed.

Those pixels in the destination image for which there are no corresponding pixels in the source image are filled with zeros (black color).

Parameters:
  • source (itom.dataObject) – Input (distorted) image (all datatypes)

  • destination (itom.dataObject) – Output (corrected) image that has the same size and type as source

  • cameraMatrix (itom.dataObject) – Input camera matrix A = [[fx 0 cx];[0 fy cy];[0 0 1]]

  • distCoeffs (itom.dataObject) – Input vector of distortion coefficients [1 x 4,5,8] (k1, k2, p1, p2 [, k3[, k4, k5, k6]]) of 4, 5 or 8 elements.

  • newCameraMatrix (itom.dataObject, optional) – Camera matrix of the distorted image. By default (if not given), it is the same as cameraMatrix but you may additionally scale and shift the result by using a different matrix.

itom.algorithms.cvUndistortPoints(source, destination, cameraMatrix, distCoeffs[, R, P])#

Computes the ideal point coordinates from the observed point coordinates.

The function is similar to cvUndistort() and cvInitUndistortRectifyMap(), but it operates on a sparse set of points instead of a raster image. Also, the function performs the reverse transformation to cvProjectPoints(). In case of a 3D object, it does not reconstruct its 3D coordinates; but for a planar object, it does, up to a translation vector, if the proper R is specified.

Parameters:
  • source (itom.dataObject) – Observed point coordinates (Nx2) float32

  • destination (itom.dataObject) – Output (corrected) point coordinates with the same size and type as source

  • cameraMatrix (itom.dataObject) – Input camera matrix A = [[fx 0 cx];[0 fy cy];[0 0 1]]

  • distCoeffs (itom.dataObject) – Input vector of distortion coefficients [1 x 4,5,8] (k1, k2, p1, p2 [, k3[, k4, k5, k6]]) of 4, 5 or 8 elements.

  • R (itom.dataObject, optional) – Rectification transformation in the object space (3x3 matrix). If not given, the identity transformation is used.

  • P (itom.dataObject, optional) – New camera matrix (3x3) or new projection matrix (3x4). If not given, the identity new camera matrix is used.

itom.algorithms.cvWarpAffine(sourceObj, destinationObj, transformationObj, destinationSize[, flags, borderType, borderValue])#

Applies an affine transformation to a 2D dataObject. The function warpAffine transforms the source dataObject using the specified matrix M:

dst(x, y) = src(M11*x + M12*y + M13, M21*x + M22*y + M23)

when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invertAffineTransform and then put into the formula above instead of M.

Note: The rotation matrix of the cvGetRotationMatrix2D filter can be used. The matrix must correspond to the pixel domain.

No meta information is set in the destinationObj because the physical units of the target object depend on the algorithm parameters.

Parameters:
  • sourceObj (itom.dataObject) – input data object of type uint8, uint16, int16, float32, float64.

  • destinationObj (itom.dataObject) – output image with the same type and size than input (inplace allowed).

  • transformationObj (itom.dataObject) – transformation matrix dataObject of shape 2x3.

  • destinationSize (Sequence[int]) – List of (width, height) of the destination dataObject.

  • flags (str, optional) –

    Combination of interpolation methods (see OpenCV InterpolationFlags) and the optional flag WARP_INVERSE_MAP that means that M is the inverse transformation (dst -> src).

    Match: [“NEAREST”, “LINEAR”, “CUBIC”, “AREA”, “LANCZOS4”, “WARP_INVERSE_MAP”], Default: “LINEAR”

  • borderType (str, optional) –

    This string defines how the filter should handle pixels at the border of the destinationObj. In case of a constant border, only pixels inside of the element mask are considered.

    Match: [“CONSTANT”, “REPLICATE”, “REFLECT”, “WRAP”, “REFLECT_101”, “TRANSPARENT”], Default: “CONSTANT”

  • borderValue (float, optional) –

    value used in case of a constant border; by default, it is 0.0

    Value range: [2.22507e-308, inf], Default: 0

itom.algorithms.cvWarpPerspective(inputObject, outputObject, M[, interpolation])#

Applies a perspective transformation to an image.

The function warpPerspective transforms the source image using the specified 3x3 matrix M.

Parameters:
  • inputObject (itom.dataObject) – input image

  • outputObject (itom.dataObject) – output image that has the size dsize and the same type as input image

  • M (itom.dataObject) – 3x3 transformation matrix

  • interpolation (int, optional) –

    Interpolation method. The following values are possible: INTER_LINEAR (1), INTER_NEAREST (0)

    Value range: [0, 1], Default: 1