Indexing a PIL image - python-3.4

Is PIL's image coordinate indexing reversed? Is it indexed as image[column, row]?
I have an image which I open using PIL (pillow)
img = Image.open('picture.png').load()
and when I try to print the pixel value of the first row, second column
print(img[0,1])
I get the pixel value of the second row, first column instead.
Can anyone clear this up?

PIL's pixel access objects are indexed with (x, y) coordinates, i.e. img[column, row], with (0, 0) at the top-left corner. So img[0, 1] is the first column of the second row, which is exactly what you are seeing.
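A minimal sketch of the ordering (assuming a local file named picture.png, as in the question):
from PIL import Image

img = Image.open('picture.png')   # picture.png as in the question
px = img.load()                   # pixel access object, indexed as px[x, y]

width, height = img.size          # size is also (width, height), i.e. (columns, rows)

print(px[1, 0])                   # x=1, y=0: second column, first row
print(px[0, 1])                   # x=0, y=1: first column, second row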

Related

How to generate a binary mask for an image that already has one half blurred, using a regression approach (in Python)

I have one original image, and from it I have to generate the following:
The original image: [original image]
A pair of blurred images with a zigzag portion (user's choice), as shown below: in the first image of the pair, the lower, zigzag-shaped part is blurred, and the second image is exactly the opposite. [image a] [image b]
A binary mask of each corresponding image from the first point, as shown below: [mask of image a] [mask of image b]
Kindly tell me how to generate the blurred part over a random portion, and the binary mask of the corresponding image.
Also, please provide the code that is applied to the original image and outputs images a and b as in the first point; a regression approach is then applied to the result of the first point to generate the binary masks shown in the second point.
I am not able to do these two tasks.
Any help will be appreciated.
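One possible approach, as a rough sketch rather than a definitive method (the file names original.png, image_a.png, etc. are placeholders, and no regression model is involved here): draw a random zigzag polygon as a binary mask with PIL, then composite a blurred copy of the image through that mask.
import numpy as np
from PIL import Image, ImageDraw, ImageFilter

img = Image.open('original.png').convert('RGB')
w, h = img.size

# Random zigzag boundary roughly across the middle of the image
xs = np.linspace(0, w, 11)
ys = h // 2 + np.random.randint(-h // 8, h // 8, size=xs.size)

# Binary mask: white below the zigzag, black above
mask = Image.new('L', (w, h), 0)
polygon = list(zip(xs.tolist(), ys.tolist())) + [(w, h), (0, h)]
ImageDraw.Draw(mask).polygon(polygon, fill=255)

# Image "a": lower zigzag region blurred; image "b": the complement
blurred = img.filter(ImageFilter.GaussianBlur(8))
image_a = Image.composite(blurred, img, mask)   # blurred where the mask is white
image_b = Image.composite(img, blurred, mask)   # blurred everywhere else

image_a.save('image_a.png')
image_b.save('image_b.png')
mask.save('mask_a.png')
Image.eval(mask, lambda v: 255 - v).save('mask_b.png')   # mask of image b is the inverse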

Get DICOM image position within a sequence

A simple question, as I am developing a Java application based on dcm4che ...
I want to calculate/find the "position" of a DICOM image within its sequence (series). By position I mean whether the image is first, second, etc. in its series. More specifically, I would like to calculate/find:
the number of slices in a sequence
the position of each slice (DICOM image) in the sequence
For the first question I know I can use tag (0020,1002) (however, it is not always populated) ... What about the second one?
If you are dealing with volumetric image series, the best way to order your series is to use Image Position (Patient) (0020,0032). This is a required Type 1 tag (it should always have a value) and it is part of the Image Plane module. It contains the X, Y and Z coordinates of the upper-left corner of the image, in mm. If the slices are parallel to each other, only one value should change between slices.
Please note that the Slice Location (0020, 1041) is an optional (Type 3) element and it may not exist in the DICOM file.
We use the InstanceNumber tag (0x0020, 0x0013) as our first choice for the slice position. If there is no InstanceNumber, or if they are all the same, then we use the SliceLocation tag (0x0020, 0x1041). If neither tag is available, then we give up.
We check the InstanceNumber tag such that the Max(InstanceNumber) - Min(InstanceNumber) + 1 is equal to the number of slices we have in the sequence (just in case some manufacturers start counting at 0 or 1, or even some other number). We check the SliceLocation the same way.
This max - min + 1 is then the number of slices in the sequence (a substitute for the ImagesInAcquisition tag, 0x0020, 0x1002).
Without the ImagesInAcquisition tag, we have no way of knowing in advance how many slices to expect...
I would argue that if the Slice Location is available, use that; it will be more consistent with the image acquisition. If it is not available, then you'll have to use, or compute a position from, the Image Position (Patient) attribute. Part 3, section C.7.6.2.1 has details on these attributes.
The main issue comes when you have a series that is oblique. If you just use the z-value of Image Position (Patient), it may not change by the Slice Thickness/Spacing Between Slices attributes, while the Slice Location typically will. That can cause confusion for end users.
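The question targets dcm4che (Java), but the tag arithmetic is toolkit-independent; here is a hedged sketch of the Image Position (Patient) approach using pydicom instead (an assumption for illustration; dicom_paths is a placeholder list of file paths). It sorts slices by the signed distance of Image Position (Patient) along the normal of Image Orientation (Patient), which also works for oblique series.
import numpy as np
import pydicom

def slice_position(ds):
    # Image Orientation (Patient): direction cosines of the row and column axes
    row = np.array(ds.ImageOrientationPatient[:3], dtype=float)
    col = np.array(ds.ImageOrientationPatient[3:], dtype=float)
    normal = np.cross(row, col)                       # slice normal
    ipp = np.array(ds.ImagePositionPatient, dtype=float)
    return float(np.dot(normal, ipp))                 # signed distance along the normal

datasets = [pydicom.dcmread(p) for p in dicom_paths]  # dicom_paths: your list of files
datasets.sort(key=slice_position)

number_of_slices = len(datasets)                      # substitute for (0020,1002)
for index, ds in enumerate(datasets, start=1):
    print(index, ds.get('InstanceNumber'), slice_position(ds))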

Extract pixel coordinates in scilab

I have extracted an edge using image processing, and then selected pixel coordinates on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinate? (The extracted edge is white on a black background.)
I also want to extract the pixel coordinates of the edge automatically, not by mouse selection. Is there any command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to mark the features you found by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it is necessary to match the two coordinate systems. I recommend this second option, and plotting your results over the original image, since this is the easiest way to check them.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your "Edge image" marks the edges not with 1 (or 255, a pure white pixel) but with a relatively high number (a bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axes coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how the darker or whiter pixels are found.

Crop image in scilab

I want to crop an image using a mouse selection of a particular region of interest in Scilab. Here is my code:
I=imread('G:\SCI\FRAME\mixer2.jpg');
I1G = rgb2gray(I);
figure();ShowImage(I1G,'mixer');
IN1G = gca();
rect1 = rubberbox();
ROI1=imcrop(I1G,rect1);disp(ROI1);
But it gives the following error: The rectangle is out of the image range.
I also tried the xclick and xgetmouse functions for cropping with mouse selection, and they give the same error.
Please give me suggestions for correcting the code.
Thanks and Regards
The problem arises from the difference between the image coordinate system (used by imcrop and all the other functions of the SIVP toolbox) and the "regular" coordinate system (used by rubberbox, xclick and all the built-in functions). Images have the first pixel at the top left; rubberbox, on the contrary, has its origin at the bottom left.
To correct this you have to reverse the y (vertical) axis coordinate before applying imcrop():
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\mixer_crop.jpg";
I=imread(imagefile);
I1G=rgb2gray(I);
scf(0); clf(0);
ShowImage(I1G,'mixer');
rect1=rubberbox();
imheight=size(I1G,"r"); //image height
rect1(2)=imheight-rect1(2); //reverse y axes coordinates (0 is at top)
ROI1=imcrop(I1G,rect1);
scf(1); clf(1);
ShowImage(ROI1,'ROI1');

filter image with opencv

I have an image from which I would like to extract a number, but in a dynamic way (I don't want to hard-code an ROI because the image may vary), so I have to filter it. I tried to detect the horizontal line (to crop the image) but it failed. I would like to detect the high-density zones in the binary image (the face and the top of the image).
PS: my problem isn't how to extract the numbers but how to specify the ROI,
and all the images have the same format.
Any help would be appreciated (even without code, just the broad outline).
Thanks
[the image]
I would start from detecting the frame of the whole document.
If you google "rectangle detection opencv", you will find lots of examples.
In the second stage I would apply inRange to filter the purple line and detect it with HoughLines.
This should be enough to calculate the ROI.
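A hedged sketch of the second stage only (Python/OpenCV; document.png and the HSV bounds are assumptions to tune for the real image): isolate the purple line with inRange in HSV, find it with HoughLinesP (the probabilistic Hough variant), and crop relative to its y position.
import cv2
import numpy as np

img = cv2.imread('document.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough purple range in OpenCV's HSV (H runs 0-179); adjust to the actual line colour
lower = np.array([125, 50, 50])
upper = np.array([160, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=100,
                        minLineLength=img.shape[1] // 2, maxLineGap=20)
if lines is not None:
    # Use the longest (widest-spanning) segment as the separator line
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: abs(l[2] - l[0]))
    roi = img[max(y1, y2):, :]   # keep the part below the line; use [:min(y1, y2)] if the number is above
    cv2.imwrite('roi.png', roi)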
