Any idea how I can convert a NormalizedVertex to a Vertex? A NormalizedVertex gives me a relative position on an image, whereas a Vertex returns coordinates based on the scale of the image. I have a set of NormalizedVertexes that I would like to convert to regular Vertexes.
https://cloud.google.com/vision/docs/reference/rest/v1p2beta1/images/annotate#normalizedvertex
If you know the size of your image (its width and height), you can simply multiply your NormalizedVertex by it.
Multiply the x field of your NormalizedVertex by the width of the picture to get the x field of the Vertex, and multiply the y field of your NormalizedVertex by the height of the image to get the y field of the Vertex.
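For illustration, a minimal Python sketch of that multiplication (the helper name is made up; it assumes the vertex object exposes x and y fields, as the Vision API's NormalizedVertex does, and rounds because Vertex coordinates are integers):

def normalized_vertex_to_vertex(nv, width, height):
    # Scale a NormalizedVertex (x, y in [0, 1]) to pixel coordinates.
    return round(nv.x * width), round(nv.y * height)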
If the Image Orientation (Patient) tag (0020,0037) reads [1,0,0,0,1,0] and the Patient Position tag (0018,5100) reads 'HFS', how do I interpret the Slice Location tag (0020,1041), assuming that it exists?
I know that it represents the 'relative position of the image plane in millimeters'; I'm just having trouble relating the end points of the range to the Z axis in the DICOM Reference Coordinate System (RCS).
Example: I have a sequence of Slice Location numbers in the range [-1873.382, -771.782].
Since the numbers are increasing, and in the DICOM RCS the Z axis increases in the Inferior-to-Superior direction, can I conclude that -1873.382 is the position of the most Inferior slice?
Also, note that the z coordinate of my Image Position (Patient) (0020,0032) attribute for each slice contains the same information as my Slice Location tag.
I still advise against using the Slice Location attribute for sorting. In MR imaging, slices can have arbitrary orientation, and even in CT the gantry can be tilted, so you cannot rely on all slices being parallel to the xy-plane. In other words, you do not actually know which axis the Slice Location refers to.
What I do is subtract the Image Position (Patient) of two slices, which gives me the direction in which the slices in the stack are moving. Ordering can then be done by the magnitude of the difference vectors.
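As a minimal Python sketch of that approach (assuming pydicom, that all slices are parallel so their positions lie along one line, and that paths is a hypothetical list of file names):

import numpy as np
import pydicom

def sort_slices(paths):
    # Order the stack by projecting each Image Position (Patient)
    # onto the direction in which the slice positions move,
    # without relying on Slice Location.
    datasets = [pydicom.dcmread(p) for p in paths]
    ipp = [np.array(ds.ImagePositionPatient, float) for ds in datasets]
    direction = ipp[1] - ipp[0]            # stack direction from two slices
    direction /= np.linalg.norm(direction)
    order = np.argsort([np.dot(p, direction) for p in ipp])
    return [datasets[i] for i in order]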
Image Position (Patient) (0020,0032) gives the x, y, and z coordinates of the upper left hand corner of the image, and Image Orientation (0020,0037) gives the direction of the first row and the first column with respect to the patient (further defined by Patient Orientation). The x-axis increases towards the left hand side of the patient, the y-axis increases towards the posterior side, and the z-axis increases towards the head of the patient.
In your case, if the Z-axis value is changing and increases towards the head, I would use the Z-axis values for sorting the stack; that is more reliable than Slice Location. And yes, the smallest value (-1873.382 in your example) is the most Inferior slice.
I am trying to figure out whether DICOM Image Position (0020,0032) is an absolute coordinate or just a coordinate within whatever slice orientation I have.
For example, I have two planes, a sagittal and a coronal plane, interleaved, with respective Image Positions in mm in the form (x,y,z) from the DICOM header. My question: is the (x,y,z) coordinate for the sagittal plane in the same 3D space as the (x,y,z) coordinate for the coronal plane, or are the Image Position values specific to that plane only?
So, is the Image Position referenced off some absolute origin point, or does it change for each specific image orientation?
Many thanks!
Yes, the Image Position (0020,0032) coordinates are absolute coordinates. They are relative to an origin point called the "frame of reference". It doesn't matter where the frame of reference is, but for CT/MRI scanners you can think of it as a fixed point for that particular scanner, relative to the scanner table (the table moves the patient through the scanner, so the frame of reference has to move too - otherwise the z-coordinates wouldn't change!)
What's important when comparing two images is not where the frame of reference is, but whether the same frame of reference is being used. If they are from the same scanner then they probably will be, but the way to check is whether the Frame of Reference UID (0020,0052) is the same.
A few things to note: if you have a stack of 2D slices, then the Image Position tag contains the coordinates of the CENTRE of the first voxel of the 2D SLICE (not of the whole stack of slices), so it will be different for each slice.
Even if two orthogonal planes line up at an edge, the Image Position coordinates won't necessarily be the same because the voxel dimensions could be different, so the centre of the voxel on one plane isn't necessarily the same as the centre of the voxel on another plane.
Also, it's worth emphasising that the coordinates are relative in some way to the scanner, not to the patient. When your planes are all reconstructed from the same data then everything is consistent. But if two scans were taken at different times then the coordinates of patient features will not necessarily match up as the patient may have moved.
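To make this concrete, here is a small Python sketch of the standard DICOM mapping (PS3.3, C.7.6.2.1.1) from a pixel index in one slice to patient coordinates, combining Image Position (Patient), Image Orientation (Patient) and Pixel Spacing (pydicom-style attribute access is assumed):

import numpy as np

def voxel_to_patient(ds, row, col):
    # Map the pixel at (row, col) to patient coordinates.
    ipp = np.array(ds.ImagePositionPatient, float)    # centre of first voxel
    iop = np.array(ds.ImageOrientationPatient, float)
    row_dir, col_dir = iop[:3], iop[3:]               # direction cosines
    dr, dc = [float(s) for s in ds.PixelSpacing]      # spacing between rows/columns, mm
    return ipp + col * dc * row_dir + row * dr * col_dir

Note that voxel_to_patient(ds, 0, 0) returns Image Position (Patient) itself, i.e. the centre of the first voxel mentioned above.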
Image Position (Patient) (0020,0032) specifies the origin of the image with respect to the patient-based coordinate system, which is a right-handed system. All three orthogonal planes must share the same Frame of Reference UID (0020,0052) to be spatially related to each other.
Yes, Image Position gives the absolute x, y, and z values in the real-world coordinate system.
In MRI we have three different coordinate systems.
1. Real-world coordinate system
2. Logical coordinate system
3. Anatomical coordinate system
Sometimes they are referred to by other names; there are heaps of names on the internet, but conceptually there are three of them.
To uniquely represent a slice in the real-world coordinate system, we need to pinpoint its position and orientation.
The absolute x, y, and z of the first voxel that is transmitted (the one at the upper left corner of the slice) are taken as the image position. That's straightforward, but it is not enough: what if the slice is rotated?
So we have to determine the orientation as well.
To do that, we take the first row and column of the image and calculate the cosines of their angles with respect to the main axes of the coordinate system; these are the image orientation.
Knowing these conventions, by looking at the image position (0020, 0032) and image orientation (0020, 0037) we can precisely pinpoint the slice in the real-world coordinate system.
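As a tiny Python illustration of these two tags working together (the values here are an example, not taken from any particular data set):

import numpy as np

# Image Orientation (Patient) holds the row and column direction cosines;
# their cross product gives the slice normal, which together with
# Image Position (Patient) pins the slice down in the real-world system.
iop = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]      # an axial slice, for example
row_dir, col_dir = np.array(iop[:3]), np.array(iop[3:])
normal = np.cross(row_dir, col_dir)       # [0, 0, 1]: points towards the head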
I have extracted an edge using image processing, then selected a pixel coordinate on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinate? (The extracted edge is white on a black background.)
I want to extract the pixel coordinates of the extracted edge automatically, not by mouse selection. Is there any command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to mark the features you found by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it is necessary to match the two coordinate systems. I recommend this second option, plotting your result over the original image, since this is the easiest way to check your results.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your edge image marks the edges not with 1 (or 255, a pure white pixel) but with a relatively high number (a bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y-axis coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how darker or brighter pixels are found.
Given that:
The shape is a regular polygon in 3D space
The start point (one arbitrary vertex of the shape) is known
The point in the middle of the shape (not on an edge - equidistant from all corners) is known
The angle at each corner (((numEdges-2)*PI)/numEdges), the radius of the shape (the distance from a corner to the midpoint, sqrt(dx^2 + dy^2 + dz^2)), and the length of each edge (radius*2*sin(PI/numEdges)) can be calculated.
Given all this information, is it possible to fill in the blanks, if you like, and work out the rest of the start/endpoints for each vertex of the shape?
I can sort of see the beginnings of the logic in 2D, but in 3D I'm lost.
I'm thinking it can't be done, since your knowns do not uniquely identify your polygon. The points you do know define a unique line, but I can provide infinitely many congruent polygons with the same vertex and center, all rotations of one another about this line.
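To see this concretely, here is a small Python sketch (numpy; all names are made up) that builds one member of that infinite family for each rotation angle phi about the center-vertex line; two different phi values give two congruent polygons with the same center and the same known vertex:

import numpy as np

def regular_polygon(center, vertex, n, phi):
    # Build one member of the infinite family of regular n-gons that
    # share the given center and one vertex; phi picks the rotation
    # of the polygon's plane about the center-vertex line.
    c = np.asarray(center, float)
    v = np.asarray(vertex, float)
    r = v - c                                   # center -> known vertex
    a = r / np.linalg.norm(r)                   # the center-vertex axis
    helper = np.array([0.0, 1.0, 0.0]) if abs(a[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(a, helper); u /= np.linalg.norm(u)
    w = np.cross(a, u)
    k = np.cos(phi) * u + np.sin(phi) * w       # plane normal for this phi
    verts = []
    for i in range(n):
        t = 2.0 * np.pi * i / n
        # Rodrigues' rotation of r about the unit axis k by angle t
        rot = (r * np.cos(t) + np.cross(k, r) * np.sin(t)
               + k * np.dot(k, r) * (1.0 - np.cos(t)))
        verts.append(c + rot)
    return np.array(verts)

# Two phi values -> two congruent pentagons, same center, same start vertex:
p1 = regular_polygon([0, 0, 0], [1, 0, 0], 5, 0.0)
p2 = regular_polygon([0, 0, 0], [1, 0, 0], 5, 1.0)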
I'd like to map from normalized device coordinates back to viewspace.
The other way around works like this:
viewspace -> clip space : multiply the homogeneous coordinates by the projection matrix
clip space -> normalized device coordinates: divide the (x,y,z,w) by w
In normalized device coordinates, all coordinates which were within the view frustum now fall into the cube x, y, z ∈ [-1,1], with w=1.
Now I'd like to transform some points on the boundary of that cube back into view coordinates. The projection matrix is nonsingular, so I can use its inverse to get from clip space to view space. But I don't know how to get from normalized device space to clip space, since I don't know how to calculate the w I need to multiply the other coordinates by.
Can someone help me with that? Thanks!
Unless you actually want to recover your clip space values for some reason, you don't need to calculate the w. Multiply your NDC point (with w=1) by the inverse of the projection matrix, and then divide the result by its w component to get back to view space.
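A minimal numpy sketch of that recipe (assuming a standard 4x4 projection matrix; the function name is made up):

import numpy as np

def ndc_to_view(ndc_xyz, proj):
    # Append w=1, go back through the projection matrix, then undo
    # the perspective divide by dividing by the resulting w component.
    ndc = np.array([ndc_xyz[0], ndc_xyz[1], ndc_xyz[2], 1.0])
    v = np.linalg.inv(proj) @ ndc
    return v[:3] / v[3]

This works because the projection is linear: inv(P) applied to clip/w gives view/w, and since the view-space point has w=1, dividing the result by its own w component recovers it.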
The flow graph at the top of the following page, and the formulas described there, might help you: http://www.songho.ca/opengl/gl_transform.html