Extract pixel coordinates in Scilab

I have extracted an edge using image processing, then selected pixel coordinates on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinates? (The extracted edge is white on a black background.)
I also want to extract the pixel coordinates of the edge automatically, not by mouse selection. Is there any command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions.
Thanks

1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to indicate your found features by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it is necessary to match the two coordinate systems. I recommend this second option: plot your result over the original image, since this is the easiest way to check your results.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your edge image marks the edges not with 1 (or 255, a pure white pixel) but with a relatively high number (a bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axis coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how darker or brighter pixels are included or excluded.

Related

R/Shiny: How to detect overlapping circle markers?

I'm using Leaflet and Shiny with circle markers. Trying to figure out how to detect if a circle marker overlaps with one or more markers. I need to set the color of each marker based on whether they overlap or not. Have any of you done something like this before? Thankful for any suggestions :)
You can use an accumulator. Represent the empty space as an n by m matrix of 0s, so each cell of the matrix represents a single point at the lowest granularity (like a pixel). Now mark the position of your marker in the accumulator with a 1. If you take the sum of the matrix at this point, it should be 1. Now set every point that would be covered by your circle to 1 in the same accumulator. Then check the sum of the accumulator: if it equals the number of points in the circle + 1, the circle does not cover the marker; if it equals only the number of points in the circle, the marker is located within the circle.
Edit: If you want to look for overlaps: instead of just setting points to 1, increment the points by 1 for every object covering them. An overlap would then have the value 2, a triple overlap a 3, etc. You could then find these by searching for local or global maxima.
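Here's a minimal sketch of the accumulator idea in Python/NumPy (the question is about R/Shiny and Leaflet, so treat this as pseudocode for the algorithm; the centers, radius and grid size are made-up values):
import numpy as np

def overlap_accumulator(centers, radius, height, width):
    # Increment the accumulator once for every circle covering each cell
    acc = np.zeros((height, width), dtype=int)
    yy, xx = np.mgrid[0:height, 0:width]
    for cy, cx in centers:
        acc[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] += 1
    return acc

centers = [(20, 20), (25, 28), (70, 70)]  # hypothetical marker positions
acc = overlap_accumulator(centers, radius=10, height=100, width=100)
print((acc >= 2).any())  # True if any cell is covered by two or more circles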

DICOM why need overlay and how to read it

Just wondering why we need overlays and when we will need them?
I have a scout image with an overlay: what do these dots mean, and what do these numbers or fractions mean?
How are these numbers drawn on the image?
The DICOM standard allows two specific types of overlays (graphics and ROI) along with the image; overlays are stored as a 1-bit image in the Overlay Data (60xx,3000) attribute. A dataset can have up to 16 separate overlay planes (using the repeating-groups encoding).
The overlay plane that represents a region of interest (ROI) will have the value "R" in the Overlay Type (60xx,0040) attribute, and ROI Area (60xx,1301), ROI Mean (60xx,1302) and ROI Standard Deviation (60xx,1303) can hold the corresponding ROI values. All bits representing the ROI will have a value of 1, marking the pixels of the underlying image data that lie within the region.
A graphic overlay will have the value "G" in the Overlay Type (60xx,0040) attribute; it is used for reference marks (reference lines), graphic annotations, bitmap text, etc. Again, all visible values in an overlay plane are set to 1.
Overlay Rows (60xx,0010) and Overlay Columns (60xx,0011) specify the height and width of the overlay plane. Overlay Bits Allocated is always 1 and Overlay Bit Position is 0 (embedding overlays in unused bits of the pixel data was allowed in earlier versions of the standard, but that usage has been retired). Overlay Origin (60xx,0050) describes the position of the first overlay point relative to the image pixels; 1\1 denotes the upper-left pixel of the image.
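As a side note, here is a minimal sketch of reading an overlay plane in Python with pydicom (the file name is a placeholder, the plane is assumed to live in group 0x6000, and overlay_array() requires a reasonably recent pydicom with numpy installed):
from pydicom import dcmread

ds = dcmread("scout.dcm")  # hypothetical file containing an overlay
overlay_type = ds[0x6000, 0x0040].value  # "G" (graphics) or "R" (ROI)
plane = ds.overlay_array(0x6000)  # decoded 1-bit plane as a numpy array
print(overlay_type, plane.shape, plane.max())  # plane values are 0 or 1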
Overlays can be used to display any data over an image. You could, for example, allow users to make annotations or graphics marks. You cannot mark the original data, so the overlay is stored in a separate layer.
In your case, the creator of the overlay should explain its meaning.
As for the meaning of the overlay in your scout image: a fraction such as 2/16 means series number 2, slice number 16.

How to deal with arbitrary size for Laplacian Pyramid?

Recently I had much fun with the Laplacian Pyramid algorithm (http://persci.mit.edu/pub_pdfs/pyramid83.pdf). But one big problem is that the original paper is limited to images of size (2^m+1) x (2^n+1). My question is: what is the best way to deal with an arbitrary w x h instead? I can think of a couple of options:
Upsample the input to the next (2^m+1) x (2^n+1) up front
Pad even lines. How exactly? Wouldn't it shift the signal?
Shift even lines by half a sample? Wouldn't it lose half a sample?
Does anybody have experience with this? What is the most practical and efficient approach? Also any pointers to papers dealing with this would be very welcome.
One approach is to create an image with width and height equal to the next (2^m+1) x (2^n+1), but instead of up-sampling the image to fill the expanded dimensions, just place it in the top-left corner and fill the empty space to the right and below with a constant value (the average value of the image is a good choice). Then encode in the normal way, storing the original image dimensions along with the pyramid. When decoding, decode and then crop to the original size.
This won't introduce any visual artifacts or degradation because you aren't stretching or offsetting the image in any way.
Because the empty space to the right and below the original image is a constant value, the high-pass bands at each level of the pyramid will be all zero in this area. So if you are using a compression scheme like run-length encoding to store each level, this will be taken care of automatically and these areas will compress to almost nothing. If not, you can simply store the top-left (potentially non-zero) area of each level and fill out the rest with zeros when decoding.
You could also find the bounding rectangle (min and max x and y) of the non-zero values at each level and store the level cropped to that rectangle. The decoder could likewise be optimized so that areas that will be cropped away are never decoded in the first place, by processing only the top-left of each level.
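As a rough NumPy sketch of the pad-then-crop bookkeeping (the pyramid build, encode and decode steps are omitted, and pad_for_pyramid with a mean fill is an illustrative assumption, not code from the paper):
import math
import numpy as np

def pad_for_pyramid(img):
    # Pad img to the next (2^m+1) x (2^n+1) size, filling with the mean value
    h, w = img.shape
    H = 2 ** math.ceil(math.log2(max(h - 1, 1))) + 1
    W = 2 ** math.ceil(math.log2(max(w - 1, 1))) + 1
    out = np.full((H, W), img.mean(), dtype=img.dtype)
    out[:h, :w] = img  # original image goes in the top-left corner
    return out, (h, w)

img = np.random.rand(300, 420)  # arbitrary-size input
padded, orig_shape = pad_for_pyramid(img)  # padded is 513 x 513
# ... build the pyramid from padded, encode, decode ...
restored = padded[:orig_shape[0], :orig_shape[1]]  # crop back after decoding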
Instead of just filling the lower-right area with a flat color, you could fill it with horizontally and vertically mirrored copies of the image to the right and below, and a copy mirrored in both directions to the bottom-right.
This will avoid the discontinuities of the first technique, although there will be a discontinuity in dx (e.g. if the value was gradually increasing from left to right it will suddenly be decreasing). Choosing a mirror that keeps dx constant and ddx zero will avoid this second-order discontinuity by linearly extrapolating the values.
Another technique, similar to what some JPEG encoders do to pad an image out to a whole number of MCU blocks, is to take the last pixel value of each row and repeat it, and likewise for columns, with the bottom-right-most pixel of the image used to fill the bottom-right area.
This last technique could easily be modified to extrapolate the gradient of values or even the gradient of gradients instead of just repeating the same value for the remainder of the row or column.
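Both of these fills map directly onto np.pad modes, so here is a small sketch (the pad amounts would come from the target (2^m+1) x (2^n+1) size computed as above):
import numpy as np

img = np.arange(12.0).reshape(3, 4)  # toy "image"
pad = ((0, 2), (0, 3))  # pad only below and to the right

mirrored = np.pad(img, pad, mode="symmetric")  # mirrored copies, as in the first variant
repeated = np.pad(img, pad, mode="edge")       # repeat last row/column, JPEG-MCU style
print(mirrored.shape, repeated.shape)  # (5, 7) for both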

Crop image in scilab

I want to crop an image at a particular region of interest using mouse selection in Scilab. Here is my code:
I=imread('G:\SCI\FRAME\mixer2.jpg');
I1G = rgb2gray(I);
figure();ShowImage(I1G,'mixer');
IN1G = gca();
rect1 = rubberbox();
ROI1=imcrop(I1G,rect1);disp(ROI1);
But it gives the following error: The rectangle is out of the image range.
I also used the xclick and xgetmouse functions for cropping with mouse selection, and they give the same error.
Please give me suggestions for correcting the code.
Thanks and Regards
The problem arises from the difference between the image coordinate system (used by imcrop and all the other functions of the SIVP toolbox) and the "regular" coordinate system (used by rubberbox, xclick and all the builtin functions). Images have their first pixel at the top-left; rubberbox, on the contrary, has its origin at the bottom-left.
To correct this you have to reverse the y (vertical) axis coordinate before applying imcrop():
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\mixer_crop.jpg";
I=imread(imagefile);
I1G=rgb2gray(I);
scf(0); clf(0);
ShowImage(I1G,'mixer');
rect1=rubberbox();
imheight=size(I1G,"r"); //image height
rect1(2)=imheight-rect1(2); //reverse y axis coordinate (0 is at top)
ROI1=imcrop(I1G,rect1);
scf(1); clf(1);
ShowImage(ROI1,'ROI1');

Matlab Bwareaopen equivalent function in OpenCV

I'm trying to find a similar or equivalent function in OpenCV to MATLAB's bwareaopen.
In MATLAB, bwareaopen(image, P) removes from a binary image all connected components (objects) that have fewer than P pixels.
In my 1-channel image I simply want to remove small regions that are not part of bigger ones. Is there any trivial way to solve this?
Take a look at the cvBlobsLib, it has functions to do what you want. In fact, the code example on the front page of that link does exactly what you want, I think.
Essentially, you can use CBlobResult to perform connected-component labeling on your binary image, and then call Filter to exclude blobs according to your criteria.
There is no such function, but you can:
1) find contours
2) compute each contour's area
3) filter out all external contours with an area less than a threshold
4) create a new black image
5) draw the remaining contours on it
6) mask it with the original image
A sketch of these steps follows below.
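A minimal Python/OpenCV sketch of those six steps (the file name and min_area are illustrative assumptions; on OpenCV 3.x, cv2.findContours returns an extra first value):
import cv2
import numpy as np

img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary input
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

min_area = 100  # area threshold in pixels
mask = np.zeros_like(img)  # new black image
for c in contours:
    if cv2.contourArea(c) >= min_area:  # keep only the large contours
        cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)

result = cv2.bitwise_and(img, mask)  # mask with the original image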
I had the same problem and came up with a function that uses connectedComponentsWithStats():
import cv2

def bwareaopen(img, min_size, connectivity=8):
    """Remove small objects from a binary image (approximation of
    bwareaopen in Matlab for 2D images).

    Args:
        img: binary image (dtype=uint8) to remove small objects from
        min_size: minimum size (in pixels) for an object to remain in the image
        connectivity: pixel connectivity; either 4 (connected via edges) or 8 (connected via edges and corners)

    Returns:
        the binary image with small objects removed
    """
    # Find all connected components (called here "labels")
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        img, connectivity=connectivity)

    # Check the size of all connected components (area in pixels)
    for i in range(num_labels):
        label_size = stats[i, cv2.CC_STAT_AREA]

        # Remove connected components smaller than min_size
        if label_size < min_size:
            img[labels == i] = 0

    return img
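A quick usage sketch (the file name and min_size are illustrative assumptions):
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary mask
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)  # ensure strictly 0/255 values
cleaned = bwareaopen(mask, min_size=50)
cv2.imwrite("mask_clean.png", cleaned)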
For clarification regarding connectedComponentsWithStats(), see:
How to remove small connected objects using OpenCV
https://www.programcreek.com/python/example/89340/cv2.connectedComponentsWithStats
https://python.hotexamples.com/de/examples/cv2/-/connectedComponentsWithStats/python-connectedcomponentswithstats-function-examples.html
The closest OpenCV solution to your question is the morphological closing or opening.
Say you have white regions in your image that you need to remove. You can use morphological opening. Opening is erosion + dilation, in that order. Erosion is when the white regions in your image are shrunk. Dilation is (the opposite) where white regions in your image are enlarged. When you perform an opening operation, your small white region is eroded until it vanishes. Larger white features will not vanish but will be eroded from the boundary. The subsequent dilation step restores their original size. However, since the small element(s) vanished during the erosion step, they will not appear in the final image after dilation.
For example, consider an image where we want to remove the small white regions but retain 3 large white ellipses. Running the following code removes the small white regions and displays the cleaned image:
import cv2
import numpy as np

im = cv2.imread('sample.png')
clean = cv2.morphologyEx(im, cv2.MORPH_OPEN, np.ones((10, 10), np.uint8))
cv2.imshow("Clean image", clean)
cv2.waitKey(0)
The command above uses a square block of size 10 as the kernel. You can modify this to suit your requirement. You can even generate a more advanced kernel using the function getStructuringElement().
Note that if your image is inverted, i.e., with black noise on a white background, you simply need to use the morphological closing operation (the cv2.MORPH_CLOSE method) instead of opening. Closing reverses the order of operations: the image is first dilated and then eroded.
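Here is a short sketch of that variant, also using getStructuringElement() as mentioned above (the kernel shape and size are illustrative):
import cv2

im = cv2.imread('sample.png')  # hypothetical inverted image: black noise on white
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
clean = cv2.morphologyEx(im, cv2.MORPH_CLOSE, kernel)
cv2.imshow("Clean image", clean)
cv2.waitKey(0)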
