Optional fillValue: number | number[]
CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F ...
Optional step: number
CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F ...
Create a Mat having the given size. This constructor builds an n-dimensional Mat.
Added in opencv4nodejs 6.2.0.
Optional type: number
CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F ...
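A minimal usage sketch of the constructors above (assuming the bindings are loaded as cv; whether you import opencv4nodejs or @u4/opencv4nodejs depends on your setup, and the sizes and fill values are illustrative):
import cv from '@u4/opencv4nodejs';
// 480x640 image with 3 channels of 8-bit unsigned ints, filled with blue (BGR order)
const img = new cv.Mat(480, 640, cv.CV_8UC3, [255, 0, 0]);
// n-dimensional variant: pass the sizes as an array plus a type
const volume = new cv.Mat([16, 16, 16], cv.CV_8U);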
Readonly channels
Mat channels like python .shape[2]
Readonly cols
Mat width like python .shape[1]
Readonly depth
Readonly dims
Readonly elem
Readonly empty
Readonly rows
Mat height like python .shape[0]
Readonly sizes
Readonly step
Readonly type
Computes useful camera characteristics from the camera intrinsic matrix.
Do keep in mind that the unity measure 'mm' stands for whatever unit of measure one chooses for the chessboard pitch (it can thus be any value).
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga87955f4330d5c20e392b265b7f92f691
Input image size in pixels.
Physical width in mm of the sensor.
Physical height in mm of the sensor.
Optional connectivity: number
Optional ltype: number
Optional connectivity?: number
Optional ltype?: number
Optional connectivity: number
Optional ltype: number
Optional connectivity?: number
Optional ltype?: number
Optional connectivity: number
Optional ltype: number
Optional connectivity?: number
Optional ltype?: number
Calculates the distance to the closest zero pixel for each pixel of the source image.
https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html#ga8a0b7fdfcb7a13dde018988ba3a43042
Type of distance, see DistanceTypes
Size of the distance transform mask, see DistanceTransformMasks. DIST_MASK_PRECISE is not supported by this variant. In case of the DIST_L1 or DIST_C distance type, the parameter is forced to 3 because a 3×3 mask gives the same result as 5×5 or any larger aperture.
Optional dstType: number
Type of output image. It can be CV_8U or CV_32F. Type CV_8U can be used only for the first variant of the function and distanceType == DIST_L1.
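A hedged sketch of a typical call on a binarized single-channel image (the file name and threshold are illustrative):
import cv from '@u4/opencv4nodejs';
// distance of every non-zero pixel to the nearest zero pixel, L2 metric, 3x3 mask
const binary = cv.imread('shapes.png').cvtColor(cv.COLOR_BGR2GRAY).threshold(127, 255, cv.THRESH_BINARY);
const dist = binary.distanceTransform(cv.DIST_L2, 3);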
Draws contours outlines or filled contours.
The function draws contour outlines in the image if thickness≥0 or fills the area bounded by the contours if thickness<0. The example in the OpenCV documentation shows how to retrieve connected components from a binary image and label them:
https://docs.opencv.org/4.5.4/d6/d6e/group__imgproc__draw.html#ga746c0625f1781f1ffc9056259103edbc
MatImgprocBindings.h
list of contours
0 based contour index to draw
Optional color?: Vec3
Optional line
Optional thickness?: number
Optional color: Vec3
Optional thickness: number
Optional lineType: number
Vertex of the rectangle.
Vertex of the rectangle opposite to pt1.
Optional color?: Vec3
Rectangle color or brightness (grayscale image).
Optional line
Type of the line. See LineTypes {@see https://docs.opencv.org/4.x/d6/d6e/group__imgproc__draw.html#gaf076ef45de481ac96e0ab3dc2c29a777}
Optional shift?: number
Number of fractional bits in the point coordinates.
Optional thickness?: number
Thickness of lines that make up the rectangle. Negative values, like FILLED, mean that the function has to draw a filled rectangle. {@see https://docs.opencv.org/4.x/d6/d6e/group__imgproc__draw.html#ggaf076ef45de481ac96e0ab3dc2c29a777a89c5f6beef080e6df347167f85e07b9e}
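A short sketch of the two-point overload described above (coordinates and colors are illustrative; a negative thickness such as -1 would draw a filled rectangle instead):
import cv from '@u4/opencv4nodejs';
const canvas = new cv.Mat(200, 300, cv.CV_8UC3, [0, 0, 0]);
// green 2px outline from pt1 (20, 20) to pt2 (120, 100)
canvas.drawRectangle(new cv.Point2(20, 20), new cv.Point2(120, 100), new cv.Vec3(0, 255, 0), 2);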
Fills a connected component with the given color.
The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at (x,y) is considered to belong to the repainted domain if its value lies within loDiff/upDiff of a neighbor already belonging to the component (see the exact formulas in the OpenCV documentation).
https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html#ga366aae45a6c1289b341d140839f18717
Starting point.
New value of the repainted domain pixels.
Optional mask: Mat
Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility of initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
Optional loDiff: T
Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
Optional upDiff: T
Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
Optional flags: T
Operation flags. The first 8 bits contain a connectivity value. The default value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered. A connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bit-wise or (|), see FloodFillFlags.
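A minimal sketch of a flood fill on a color image (the file name, seed point and fill color are illustrative; the exact shape of the returned value, typically the filled area and its bounding rect, follows the binding's own signature):
import cv from '@u4/opencv4nodejs';
const img = cv.imread('scene.jpg');
// fill the region connected to the seed point with solid red (BGR)
const fillInfo = img.floodFill(new cv.Point2(10, 10), new cv.Vec3(0, 0, 255));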
if Mat.dims <= 2
https://github.com/justadudewhohacks/opencv4nodejs/issues/329
Note: this method offers low performance, use getData instead.
if Mat.dims > 2 (3D)
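A small sketch contrasting the two accessors (values are illustrative):
import cv from '@u4/opencv4nodejs';
const mat = new cv.Mat(2, 3, cv.CV_8U, 7);
const rows = mat.getDataAsArray(); // nested JS arrays, convenient but slow
const buf = mat.getData();         // raw Buffer over the Mat data, the fast path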
The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter. By varying this parameter, you may retrieve only sensible pixels alpha=0 , keep all the original image pixels if there is valuable information in the corners alpha=1 , or get something in between. When alpha>0 , the undistorted result is likely to have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to initUndistortRectifyMap to produce the maps for remap.
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga7a6c4e032c97f03ba747966e6ad862b1
Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
Original image size.
Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). See stereoRectify for details.
Optional newImageSize: Size
Image size after rectification. By default, it is set to imageSize.
Optional centerPrincipalPoint: boolean
Optional flag that indicates whether in the new camera intrinsic matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.
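A rough sketch only: the intrinsics and distortion coefficients below are made up (they would normally come from calibrateCamera), and whether distCoeffs is passed as a plain array or a Mat, plus the exact shape of the returned value, should be checked against the binding's signature:
import cv from '@u4/opencv4nodejs';
const cameraMatrix = new cv.Mat([[800, 0, 320], [0, 800, 240], [0, 0, 1]], cv.CV_64F);
const distCoeffs = [0.1, -0.05, 0, 0, 0];
const imageSize = new cv.Size(640, 480);
// alpha = 0 keeps only valid pixels, alpha = 1 retains all source pixels
const optimal = cameraMatrix.getOptimalNewCameraMatrix(distCoeffs, imageSize, 0);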
Optional mask: Mat
Optional blockSize: number
Optional gradientSize: number
Optional useHarrisDetector: boolean
Optional harrisK: number
Optional mask: Mat
Optional blockSize: number
Optional gradientSize: number
Optional useHarrisDetector: boolean
Optional harrisK: number
Calculates the integral of an image.
https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html#ga97b87bec26908237e8ba0f6e96d23e28
Optional sdepth: number
desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.
Optional sqdepth: number
desired depth of the integral image of squared pixel values, CV_32F or CV_64F.
Optional sdepth?: number
Optional sqdepth?: number
Optional sdepth: number
Optional sqdepth: number
Optional sdepth?: number
Optional sqdepth?: number
Compares a template against overlapped image regions.
The function slides through image , compares the overlapped patches of size w×h against templ using the specified method and stores the comparison results in result . TemplateMatchModes describes the formulae for the available comparison methods ( I denotes image, T template, R result, M the optional mask ). The summation is done over template and/or the image patch: x′=0...w−1,y′=0...h−1 After the function finishes the comparison, the best matches can be found as global minimums (when TM_SQDIFF was used) or maximums (when TM_CCORR or TM_CCOEFF was used) using the minMaxLoc function. In case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels and separate mean values are used for each channel. That is, the function can take a color template and a color image. The result will still be a single-channel image, which is easier to analyze.
https://docs.opencv.org/4.x/df/dfb/group__imgproc__object.html#ga586ebfb0a7fb604b35a23d85391329be
Searched template. It must not be greater than the source image and must have the same data type.
Parameter specifying the comparison method, can be one of TM_SQDIFF, TM_SQDIFF_NORMED, TM_CCORR, TM_CCORR_NORMED, TM_CCOEFF, TM_CCOEFF_NORMED.
Optional mask: Mat
Optional mask. It must have the same size as templ. It must either have the same number of channels as template or only one channel, which is then used for all template and image channels. If the data type is CV_8U, the mask is interpreted as a binary mask, meaning only elements where mask is nonzero are used and are kept unchanged independent of the actual mask value (weight equals 1). For data type CV_32F, the mask values are used as weights. The exact formulas are documented in TemplateMatchModes.
Map of comparison results. It must be single-channel 32-bit floating-point. If image is W×H and templ is w×h , then result is (W−w+1)×(H−h+1) .
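A typical search-and-locate sketch combining matchTemplate with minMaxLoc (file names are illustrative):
import cv from '@u4/opencv4nodejs';
const haystack = cv.imread('scene.jpg');
const needle = cv.imread('patch.jpg');
// single-channel 32-bit float map of match scores
const result = haystack.matchTemplate(needle, cv.TM_CCOEFF_NORMED);
// for TM_CCOEFF_NORMED the best match is the global maximum
const { maxLoc } = result.minMaxLoc();
haystack.drawRectangle(maxLoc, new cv.Point2(maxLoc.x + needle.cols, maxLoc.y + needle.rows), new cv.Vec3(0, 255, 0), 2);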
Finds the global minimum and maximum in an array.
The function cv::minMaxLoc finds the minimum and maximum element values and their positions. The extremums are searched across the whole array or, if mask is not an empty array, in the specified array region.
The function does not work with multi-channel arrays. If you need to find minimum or maximum elements across all the channels, use Mat::reshape first to reinterpret the array as single-channel. Or you may extract the particular channel using either extractImageCOI, or mixChannels, or split.
https://docs.opencv.org/4.x/d2/de8/group__core__array.html#gab473bf2eb6d14ff97e89b355dac20707
Optional mask: Mat
optional mask used to select a sub-array.
Optional normType: number
Optional mask: Mat
Optional opts: {
Optional bottom
Optional color?: Vec3
Optional line
Optional thickness?: number
Optional opts: {
Optional bottom
Optional color?: Vec3
Optional line
Optional thickness?: number
Decrements the reference counter and deallocates the matrix if needed.
The method decrements the reference counter associated with the matrix data. When the reference counter reaches 0, the matrix data is deallocated and the data and the reference counter pointers are set to NULL's. If the matrix header points to an external data set (see Mat::Mat ), the reference counter is NULL, and the method has no effect in this case.
This method can be called manually to force the matrix data deallocation. But since this method is automatically called in the destructor, or by any other method that changes the data pointer, it is usually not needed. The reference counter decrement and check for 0 is an atomic operation on the platforms that support it. Thus, it is safe to operate on the same matrices asynchronously in different threads. https://docs.opencv.org/4.6.0/d3/d63/classcv_1_1Mat.html#ae48d4913285518e2c21a3457017e716e
Optional fx: number
Optional fy: number
Optional interpolation: number
Optional fx: number
Optional fy: number
Optional interpolation: number
Optional fx: number
Optional fy: number
Optional interpolation: number
Optional fx: number
Optional fy: number
Optional interpolation: number
Applies a separable linear filter to an image.
The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .
https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html#ga910e29ff7d7b105057d1625a4bf6318d
Destination image depth, see combinations
Coefficients for filtering each row.
Coefficients for filtering each column.
Optional anchor: Point2
Anchor position within the kernel. The default value (−1,−1) means that the anchor is at the kernel center.
Optional delta: number
Value added to the filtered results before storing them.
Optional borderType: number
Pixel extrapolation method, see BorderTypes. BORDER_WRAP is not supported.
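A hedged sketch of a separable 3x3 box blur (the kernels are built as small Mats here; the file name and kernel values are illustrative):
import cv from '@u4/opencv4nodejs';
const img = cv.imread('photo.jpg');
// 1x3 row kernel and 3x1 column kernel of a simple box filter
const kernelX = new cv.Mat([[1 / 3, 1 / 3, 1 / 3]], cv.CV_32F);
const kernelY = new cv.Mat([[1 / 3], [1 / 3], [1 / 3]], cv.CV_32F);
const blurred = img.sepFilter2D(cv.CV_8U, kernelX, kernelY);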
output image depth, see combinations; in the case of 8-bit input images it will result in truncated derivatives.
order of the derivative x.
order of the derivative y.
Optional ksize: 1 | 3 | 5 | 7
size of the extended Sobel kernel; it must be 1, 3, 5, or 7.
Optional scale: number
optional scale factor for the computed derivative values; by default, no scaling is applied (see getDerivKernels for details).
Optional delta: number
optional delta value that is added to the results prior to storing them in dst.
Optional borderType: number
pixel extrapolation method, see BorderTypes. BORDER_WRAP is not supported.
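A short Sobel sketch on a grayscale image (the file name is illustrative; CV_64F output is chosen so negative gradients are not truncated):
import cv from '@u4/opencv4nodejs';
const gray = cv.imread('photo.jpg').cvtColor(cv.COLOR_BGR2GRAY);
// first derivative in x, 3x3 kernel
const gradX = gray.sobel(cv.CV_64F, 1, 0, 3);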
Optional border
Optional delta?: number
Optional ksize?: 1 | 3 | 5 | 7
Optional scale?: number
Optional ksize: 1 | 3 | 5 | 7
Optional scale: number
Optional delta: number
Optional borderType: number
Optional border
Optional delta?: number
Optional ksize?: 1 | 3 | 5 | 7
Optional scale?: number
Computes rectification transforms for each head of a calibrated stereo camera.
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga617b1685d4059c6040827800e72ad2b6
First camera distortion parameters.
Second camera intrinsic matrix.
Second camera distortion parameters.
Size of the image used for stereo calibration.
Rotation matrix from the coordinate system of the first camera to the second camera, see stereoCalibrate.
Translation vector from the coordinate system of the first camera to the second camera, see stereoCalibrate.
Optional flags: number
Operation flags that may be zero or CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.
Optional alpha: number
Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases.
Optional newImageSize: Size
New image resolution after rectification. The same size should be passed to initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.
Calculates the sum of array elements. The function cv::sum calculates and returns the sum of array elements, independently for each channel. https://docs.opencv.org/4.x/d2/de8/group__core__array.html#ga716e10a2dd9e228e4d3c95818f106722 Mat must have from 1 to 4 channels.
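A tiny sketch of sum (per the note above the input must have 1 to 4 channels; the return value is a number for single-channel input and a Vec for multi-channel input):
import cv from '@u4/opencv4nodejs';
const mask = new cv.Mat(4, 4, cv.CV_8U, 1);
const total = mask.sum(); // 16 for this all-ones single-channel Mat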
Applies a fixed-level threshold to each array element.
The function applies fixed-level thresholding to a multiple-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image (compare could also be used for this purpose) or for removing noise, that is, filtering out pixels with too small or too large values. There are several types of thresholding supported by the function. They are determined by the type parameter.
Also, the special values THRESH_OTSU or THRESH_TRIANGLE may be combined with one of the above values. In these cases, the function determines the optimal threshold value using the Otsu's or Triangle algorithm and uses it instead of the specified thresh.
Note: Currently, the Otsu's and Triangle methods are implemented only for 8-bit single-channel images. https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html#gae8a4a146d1ca78c626a53577199e9c57
threshold value.
maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types
thresholding type (see ThresholdTypes).
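A short thresholding sketch (the file name and threshold values are illustrative):
import cv from '@u4/opencv4nodejs';
const gray = cv.imread('photo.jpg').cvtColor(cv.COLOR_BGR2GRAY);
// fixed threshold: pixels above 127 become 255, the rest become 0
const binary = gray.threshold(127, 255, cv.THRESH_BINARY);
// Otsu's method picks the threshold automatically (8-bit single-channel input only)
const otsu = gray.threshold(0, 255, cv.THRESH_BINARY | cv.THRESH_OTSU);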
This function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera.
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#gad3fc9a0c82b08df034234979960b778c
2xN array of feature points in the first image. In the case of the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
2xN array of corresponding points in the second image. In the case of the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
Transforms an image to compensate for lens distortion.
The function transforms an image to compensate for radial and tangential lens distortion.
The function is simply a combination of initUndistortRectifyMap (with unity R ) and remap (with bilinear interpolation). See the former function for details of the transformation being performed.
Those pixels in the destination image, for which there are no corresponding pixels in the source image, are filled with zeros (black color).
A particular subset of the source image that will be visible in the corrected image can be regulated by newCameraMatrix. You can use getOptimalNewCameraMatrix to compute the appropriate newCameraMatrix depending on your requirements.
The camera matrix and the distortion parameters can be determined using calibrateCamera. If the resolution of images is different from the resolution used at the calibration stage, fx,fy,cx and cy need to be scaled accordingly, while the distortion coefficients remain the same.
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga69f2545a8b62a6b0fc2ee060dc30559d
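A hedged sketch of undistorting an image (the intrinsics and distortion coefficients are made-up placeholders; real values come from calibrateCamera):
import cv from '@u4/opencv4nodejs';
const cameraMatrix = new cv.Mat([[800, 0, 320], [0, 800, 240], [0, 0, 1]], cv.CV_64F);
const distCoeffs = new cv.Mat([[0.1, -0.05, 0, 0, 0]], cv.CV_64F);
const corrected = cv.imread('distorted.jpg').undistort(cameraMatrix, distCoeffs);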
Optional disp12MaxDisp: number
Optional disp12MaxDisp: number
Performs a marker-based image segmentation using the watershed algorithm.
The function implements one of the variants of watershed, a non-parametric marker-based segmentation algorithm, described in [173].
Before passing the image to the function, you have to roughly outline the desired regions in the image markers with positive (>0) indices. So, every region is represented as one or more connected components with the pixel values 1, 2, 3, and so on. Such markers can be retrieved from a binary mask using findContours and drawContours (see the watershed.cpp demo). The markers are "seeds" of the future image regions. All the other pixels in markers , whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0's. In the function output, each pixel in markers is set to a value of the "seed" components or to -1 at boundaries between the regions.
Note Any two neighbor connected components are not necessarily separated by a watershed boundary (-1's pixels); for example, they can touch each other in the initial marker image passed to the function. https://docs.opencv.org/4.6.0/d3/d47/group__imgproc__segmentation.html#ga3267243e4d3f95165d55a618c65ac6e1
Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
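A rough sketch only: the seed placement is illustrative (seeds are more typically produced with findContours/drawContours or thresholding), and the result is the updated marker map with -1 at region boundaries:
import cv from '@u4/opencv4nodejs';
const img = cv.imread('coins.jpg');
// 32-bit single-channel marker map, same size as img, seeds labelled 1, 2, ...
const markers = new cv.Mat(img.rows, img.cols, cv.CV_32S, 0);
markers.drawCircle(new cv.Point2(50, 50), 5, new cv.Vec3(1, 1, 1), -1);   // seed for region 1
markers.drawCircle(new cv.Point2(150, 150), 5, new cv.Vec3(2, 2, 2), -1); // seed for region 2
const segmented = img.watershed(markers);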
Static eye
Returns an identity matrix of the specified size and type.
The method returns a Matlab-style identity matrix initializer, similarly to Mat::zeros. Similarly to Mat::ones, you can use a scale operation to create a scaled identity matrix efficiently:
// make a 4x4 diagonal matrix with 0.1's on the diagonal.
Mat A = Mat::eye(4, 4, CV_32F)*0.1;
Note: In case of multi-channel type, the identity matrix will be initialized only for the first channel; the others will be set to 0's. https://docs.opencv.org/4.x/d3/d63/classcv_1_1Mat.html#a458874f0ab8946136254da37ba06b78b
Number of rows.
Number of columns.
Created matrix type.
Static ones
Returns an array of all 1's of the specified size and type.
The method returns a Matlab-style 1's array initializer, similarly to Mat::zeros. Note that using this method you can initialize an array with an arbitrary value, using the following Matlab idiom:
Mat A = Mat::ones(100, 100, CV_8U)*3; // make 100x100 matrix filled with 3.
The above operation does not form a 100x100 matrix of 1's and then multiply it by 3. Instead, it just remembers the scale factor (3 in this case) and uses it when actually invoking the matrix initializer.
Note: In case of multi-channel type, only the first channel will be initialized with 1's, the others will be set to 0's. https://docs.opencv.org/4.x/d3/d63/classcv_1_1Mat.html#a5e10227b777425407986727e2d26fcdc
Static zeros
Returns a zero array of the specified size and type.
The method returns a Matlab-style zero array initializer. It can be used to quickly form a constant array as a function parameter, part of a matrix expression, or as a matrix initializer:
Mat A; A = Mat::zeros(3, 3, CV_32F);
In the example above, a new matrix is allocated only if A is not a 3x3 floating-point matrix. Otherwise, the existing matrix A is filled with zeros. https://docs.opencv.org/4.x/d3/d63/classcv_1_1Mat.html#a56daa006391a670e9cb0cd08e3168c99
Number of rows.
Number of columns.
Created matrix type.
CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F ...
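A short sketch of the static factories documented above (sizes and types are illustrative):
import cv from '@u4/opencv4nodejs';
const identity = cv.Mat.eye(4, 4, cv.CV_32F);  // 4x4 identity
const zeros = cv.Mat.zeros(3, 3, cv.CV_32F);   // 3x3 all zeros
const ones = cv.Mat.ones(100, 100, cv.CV_8U);  // 100x100 all ones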