Matlab bwareaopen equivalent function in OpenCV

I'm trying to find a similar or equivalent function to Matlab's bwareaopen function in OpenCV.
In Matlab, bwareaopen(image, P) removes from a binary image all connected components (objects) that have fewer than P pixels.
In my single-channel image I simply want to remove small regions that are not part of bigger ones. Is there any trivial way to solve this?

Take a look at cvBlobsLib; it has functions to do what you want. In fact, the code example on the front page of that link does exactly what you want, I think.
Essentially, you can use CBlobResult to perform connected-component labeling on your binary image, and then call Filter to exclude blobs according to your criteria.

There is no such function, but you can (see the sketch after this list):
1) Find contours
2) Find the contour areas
3) Filter out all external contours with an area less than the threshold
4) Create a new black image
5) Draw the remaining contours on it
6) Mask it with the original image
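In OpenCV's Python API those steps might look like this minimal sketch (the file name and area threshold are placeholders, and note that cv2.findContours returns three values instead of two in OpenCV 3.x):

import cv2
import numpy as np

img = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

min_area = 100  # threshold, tune for your data
mask = np.zeros_like(img)  # new black image
for c in contours:
    if cv2.contourArea(c) >= min_area:
        cv2.drawContours(mask, [c], -1, 255, cv2.FILLED)  # keep large contours

result = cv2.bitwise_and(img, mask)  # mask with the original image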

I had the same problem and came up with a function that uses connectedComponentsWithStats():
import cv2

def bwareaopen(img, min_size, connectivity=8):
    """Remove small objects from a binary image (approximation of
    bwareaopen in Matlab for 2D images).

    Args:
        img: binary image (dtype=uint8) to remove small objects from
        min_size: minimum size (in pixels) for an object to remain in the image
        connectivity: pixel connectivity; either 4 (connected via edges)
            or 8 (connected via edges and corners)

    Returns:
        the binary image with small objects removed
    """
    # Find all connected components (called here "labels")
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        img, connectivity=connectivity)

    # Check the size of all connected components (area in pixels)
    for i in range(num_labels):
        label_size = stats[i, cv2.CC_STAT_AREA]

        # Remove connected components smaller than min_size
        # (label 0 is the background; its pixels are already 0, so
        # including it in the loop is harmless)
        if label_size < min_size:
            img[labels == i] = 0

    return img
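A hypothetical usage example (the file name and threshold are placeholders):

import cv2

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)  # ensure a binary image
cleaned = bwareaopen(mask, min_size=50)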
For clarification regarding connectedComponentsWithStats(), see:
How to remove small connected objects using OpenCV
https://www.programcreek.com/python/example/89340/cv2.connectedComponentsWithStats
https://python.hotexamples.com/de/examples/cv2/-/connectedComponentsWithStats/python-connectedcomponentswithstats-function-examples.html

The closest OpenCV solution to your question is morphological opening (or closing).
Say you have white regions in your image that you need to remove. You can use morphological opening. Opening is erosion + dilation, in that order. Erosion is when the white regions in your image are shrunk. Dilation is (the opposite) where white regions in your image are enlarged. When you perform an opening operation, your small white region is eroded until it vanishes. Larger white features will not vanish but will be eroded from the boundary. The subsequent dilation step restores their original size. However, since the small element(s) vanished during the erosion step, they will not appear in the final image after dilation.
For example, consider this image where we want to remove the small white regions but retain the 3 large white ellipses. Running the following code removes the white regions and displays the cleaned image:
import cv2
import numpy as np

im = cv2.imread('sample.png')
clean = cv2.morphologyEx(im, cv2.MORPH_OPEN, np.ones((10, 10), np.uint8))
cv2.imshow("Clean image", clean)
cv2.waitKey(0)
The clean image output would be like this.
The command above uses a square block of size 10 as the kernel. You can modify this to suit your requirement. You can even generate a more advanced kernel using the function getStructuringElement().
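For example, a sketch with an elliptical kernel (the shape and size here are arbitrary choices):

import cv2

im = cv2.imread('sample.png')
# An elliptical kernel often fits round features better than a square block
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
clean = cv2.morphologyEx(im, cv2.MORPH_OPEN, kernel)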
Note that if your image is inverted, i.e., with black noise on a white background, you simply need to use the morphological closing operation (the cv2.MORPH_CLOSE method) instead of opening. Closing reverses the order of operations: the image is first dilated and then eroded.

Related

QPainter::drawImage prints different size than QImage::save and print from Photoshop

I'm scaling a QImage, currently as so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image has already been drawn before the calls that set the dots per meter.
In contrast, when saving, it appears that the device which you save to is copying the values you have set for the dots per meter on the image, then drawing to that device.
I would expect that creating a second QImage, setting its dots per meter, and then copying from the original to that second image would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter on the original QImage before loading its content.
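A minimal PyQt5 sketch of that first suggestion (untested; the question is C++ Qt, but the API is the same, and the file names are placeholders):

from PyQt5.QtGui import QGuiApplication, QImage, QPainter

app = QGuiApplication([])  # required for painting in some environments
src = QImage("test.jpg")
dst = QImage(src.size(), src.format())
dst.setDotsPerMeterX(src.dotsPerMeterX() * 2)  # set the scale before drawing
dst.setDotsPerMeterY(src.dotsPerMeterY() * 2)
painter = QPainter(dst)
painter.drawImage(0, 0, src)  # copy the original into the rescaled image
painter.end()
dst.save("test_scaled.jpg")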

How to deal with arbitrary size for Laplacian Pyramid?

Recently I had much fun with the Laplacian Pyramid algorithm (http://persci.mit.edu/pub_pdfs/pyramid83.pdf). But one big problem is that the original paper is limited to images of size (2^m+1) x (2^n+1). My question is: what is the best way to deal with an arbitrary w x h instead? I can think of a couple of options:
Upsample the input to the next (2^m+1, 2^n+1) up front
Pad even lines. How exactly? Wouldn't it shift the signal?
Shift even lines by half a sample? Wouldn't it lose half a sample?
Does anybody have experience with this? What is the most practical and efficient approach? Also any pointers to papers dealing with this would be very welcome.
One approach is to create an image with a width and height equal to the next (2^m+1, 2^n+1), but instead of up-sampling the image to fill the expanded dimensions, just place it in the top-left corner and fill the empty space to the right and below with a constant value (the average value for the image is a good choice for this). Then encode in the normal way, storing the original image dimensions along with the pyramid. When decoding, decode and then crop to the original size.
This won't introduce any visual artifacts or degradation because you aren't stretching or offsetting the image in any way.
Because the empty space to the right and below the original image is a constant value, the high-pass bands at each level in the image pyramid will be all zero in this area. So if you are using a compression scheme like run length encoding to store each level, this will be automatically taken care of and these areas will be compressed to almost nothing. If not, then you can simply store the top-left (potentially non-zero) area of each level and then fill out the rest with zeros when decoding.
You could find the min and max x and y bounding rectangle of the non-zero values for each level and store this along with the level, cropped to include only non-zero values. The decoder could also be optimized so that areas of the image that are going to be cropped away are not actually decoded in the first place, by only processing the top-left of each level.
Here's an illustration of the technique:
Instead of just filling the lower-right area with a flat color, you could fill it with horizontally and vertically mirrored copies of the image to the right and below, and a copy mirrored in both directions to the bottom-right, like this:
This will avoid the discontinuities of the first technique, although there will be a discontinuity in dx (e.g. if the value was gradually increasing from left to right it will suddenly be decreasing). Choosing a mirror that keeps dx constant and ddx zero will avoid this second-order discontinuity by linearly extrapolating the values.
Another technique, which is similar to what some JPEG encoders do to pad out an image to a whole number of MCU blocks, is to take the last pixel value of each row and repeat it, and likewise for columns, with the bottom-right-most pixel of the image used to fill the bottom-right area:
This last technique could easily be modified to extrapolate the gradient of values or even the gradient of gradients instead of just repeating the same value for the remainder of the row or column.
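All three padding strategies are easy to prototype with NumPy. A sketch, assuming a 2D grayscale image (the function name and the mapping onto np.pad modes are my choices):

import numpy as np

def pad_to_pyramid_size(img, mode="edge"):
    # Pad at the right and bottom up to the next (2^m+1, 2^n+1) size.
    # mode: "constant" (flat mean fill), "symmetric" (mirrored copies),
    # or "edge" (repeat the last row/column, JPEG-style)
    h, w = img.shape
    H = 2 ** int(np.ceil(np.log2(h - 1))) + 1
    W = 2 ** int(np.ceil(np.log2(w - 1))) + 1
    pad = ((0, H - h), (0, W - w))
    if mode == "constant":
        return np.pad(img, pad, mode="constant", constant_values=img.mean())
    return np.pad(img, pad, mode=mode)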

Extract pixel coordinates in scilab

I have extracted an edge using image processing, then selected pixel coordinates on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinate? (The extracted edge is white on a black background.)
I want to automatically extract the pixel coordinates of the extracted edge, not select them by mouse. Is there any command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you need only relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to indicate your found features by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it's necessary to match the two coordinate systems. I recommend this second option, and to plot your results over the original image, since this is the easiest way to check them.
2.) Automatic coordinate extraction can be made by the find function: it returns those indices of the matrix, where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your "Egde image" marks the edges not with 1 (or 255, pure white pixel) but with a relatively high number (bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axes coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how the darker or whiter pixels are found.

filter image with opencv

I have an image from which I would like to extract the number, but in a dynamic way (I don't want to specify an ROI, because the image may vary), so I have to filter it. I tried to detect the horizontal line (to crop the image), but it failed. I would like to detect high-density zones in the binary image (the face and the top of the image).
PS: my problem isn't how to extract the numbers but how to specify the ROI,
and all the images have the same format.
Any help would be appreciated (even without code, just the big lines).
Thanks
the image
I would start from detecting the frame of the whole document.
If you google "rectangle detection opencv", you will find lots of examples.
In the second stage I would apply inRange to filter the purple line and detect it with HoughLines.
This should be enough to calculate the ROI.
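A rough Python sketch of that second stage (the HSV range for the purple line and all thresholds are guesses to be tuned on your images):

import cv2
import numpy as np

img = cv2.imread('document.png')  # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only purple-ish pixels; this hue range is an assumption
mask = cv2.inRange(hsv, np.array([125, 50, 50]), np.array([155, 255, 255]))

# Detect the line on the mask
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    roi = img[:min(y1, y2), :]  # e.g. crop everything above the line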

How do I color edges or draw rects correctly in an R dendrogram?

I generated this dendrogram using R's hclust(), as.dendrogram() and plot.dendrogram() functions.
I used the dendrapply() function and a local function to color leaves, which is working fine.
I have results from a statistical test that indicate if a set of nodes (e.g. the cluster of "+v_stat5a_01" and "+v_stat5b_01" in the lower-right corner of the tree) are significant or important.
I also have a local function that I can use with dendrapply() that finds the exact node in my dendrogram which contains significant leaves.
I would like to either (following the example):
Color the edges that join "+v_stat5a_01" and "+v_stat5b_01"; or,
Draw a rect() around "+v_stat5a_01" and "+v_stat5b_01"
I have the following local function (the details of the "nodes-in-leafList-match-nodes-in-clusterList" condition aren't important, but that it highlights significant nodes):
markSignificantClusters <<- function (n) {
  if (!is.leaf(n)) {
    a <- attributes(n)
    leafList <- unlist(dendrapply(n, listLabels))
    for (clusterIndex in 1:length(significantClustersList[[1]])) {
      clusterList <- unlist(significantClustersList[[1]][clusterIndex])
      if (nodes-in-leafList-match-nodes-in-clusterList) {
        # I now have a node "n" that contains significant leaves, and
        # I'd like to use a dendrapply() call to another local function
        # which colors the edges that run down to the leaves; or, draw
        # a rect() around the leaves
      }
    }
  }
}
From within this if block, I have tried calling dendrapply(n, markEdges), but this did not work:
markEdges <<- function (n) {
  a <- attributes(n)
  attr(n, "edgePar") <- c(a$edgePar, list(lty=3, col="red"))
}
In my ideal example, the edges connecting "+v_stat5a_01" and "+v_stat5b_01" would be dashed and of a red color.
I have also tried using rect.hclust() within this if block:
ma <- match(leafList, orderedLabels)
rect.hclust(scoreClusterObj, h = a$height, x = c(min(ma), max(ma)), border = 2)
But the result does not work with horizontal dendrograms (i.e. dendrograms with horizontal labels). Here is an example (note the red stripe in the lower-right corner). Something is not correct about the dimensions of what rect.hclust() generates, and I don't understand how it works well enough to write my own version.
I appreciate any advice for getting edgePar or rect.hclust() to work properly, or to be able to write my own rect.hclust() equivalent.
UPDATE
Since asking this question, I used getAnywhere(rect.hclust) to get the function code that calculates parameters and draws the rect object. I wrote a custom version of this function to handle horizontal and vertical leaves, and I call it with dendrapply().
However, there is some kind of clipping effect that removes part of the rect. For horizontal leaves (leaves that are drawn on the right side of the tree), the rightmost edge of the rect either disappears or is thinner than the border width of the other three sides of the rect. For vertical leaves (leaves that are drawn on the bottom of the tree), the bottommost edge of the rect suffers the same display problem.
What I had done as a means of marking significant clusters is to reduce the width of the rect such that I render a vertical red stripe between the tips of the cluster edges and the (horizontal) leaf labels.
This eliminates the clipping issue, but introduces another problem, in that the space between the cluster edge tips and the leaf labels is only six or so pixels wide, which I don't have much control over. This limits the width of the vertical stripe.
The worse problem is that the x-coordinate that marks where the vertical stripe can fit between the two elements will change based on the width of the larger tree (par["usr"]), which in turn depends on how the tree hierarchy ends up being structured.
I wrote a "correction" or, better termed, a hack to adjust this x value and the rect width for horizontal trees. It doesn't always work consistently, but for the trees I am making, it seems to keep from getting too close to (or overlapping) edges and labels.
Ultimately, a better fix would be to find out how to draw the rect so that there is no clipping. Or a consistent way to calculate the specific x position in between tree edges and labels for any given tree, so as to center and size the stripe properly.
I would also be very interested in a method for annotating edges with colors or line styles.
So you've actually asked about five questions (5 +/- 3). As far as writing your own rect.hclust like function, the source is in library/stats/R/identify.hclust.R if you want to look at it.
I took a quick glance at it myself and am not sure it does what I thought it did from reading your description: it seems to be drawing multiple rectangles. Also, the x selector appears to be hard-coded to segregate the tags horizontally (which isn't what you want, and there's no y).
I'll be back, but in the meantime you might (in addition to looking at the source) try doing multiple rect.hclust with different border= colors and different h= values to see if a failure pattern emerges.
Update
I haven't had much luck poking at this either.
One possible kludge for the clipping would be to pad the labels with trailing spaces and then bring the edge of your rectangle in slightly (the idea being that just bringing the rectangle in would get it out of the clipping zone but overwrite the ends of the labels).
Another idea would be to fill the rectangle with a translucent (low alpha) color, making a shaded area rather than a bounding box.
