Rescale slope and rescale intercept - DICOM

I have a few questions about the rescale slope and rescale intercept in CT DICOM:
Are they used to transform the original data from the scanner into the Hounsfield scale, in which water is 0 and air is -1000?
I am in the image display group. How can I know the rescale slope and the rescale intercept values?
What is the exact meaning of the rescale slope and the rescale intercept? How are they determined?

The rescale slope and rescale intercept allow you to transform the pixel values to HU or other units, as specified in the tag (0028,1054).
For CT images the unit should be HU (Hounsfield units), and the default is indeed HU when the tag (0028,1054) is not present.
However, the tag may be present and may specify a different unit (OD = optical density, US = unspecified).
The rescale slope and intercept are determined by the manufacturer of the hardware.
If the transformation from original pixel values to Hounsfield units or optical density is not linear, then a LUT is applied instead.
Check Part 3 of the standard, section C.11, for more detailed information, and also this answer: Window width and center calculation of DICOM image
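As a minimal sketch of how the display side can read these values (assuming pydicom; the file name is hypothetical, and RescaleType is the keyword for tag (0028,1054), which may be absent):

import pydicom

ds = pydicom.dcmread("ct_slice.dcm")  # hypothetical file name

slope = float(ds.RescaleSlope)          # tag (0028,1053)
intercept = float(ds.RescaleIntercept)  # tag (0028,1052)
units = ds.get("RescaleType", "HU")     # tag (0028,1054); defaults to HU for CT

hu = slope * ds.pixel_array + intercept  # the linear Modality LUT
print(units, hu.min(), hu.max())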

This is my implementation:
import numpy as np


def window_ct(dcm, w, c, ymin, ymax):
    """Windows a CT slice.

    http://dicom.nema.org/medical/dicom/current/output/chtml/part03/sect_C.11.2.html

    Args:
        dcm (pydicom.dataset.FileDataset): the DICOM dataset with the slice.
        w: Window Width parameter.
        c: Window Center parameter.
        ymin: Minimum output value.
        ymax: Maximum output value.

    Returns:
        Windowed slice.
    """
    # Convert to HU with the linear Modality LUT
    b = dcm.RescaleIntercept
    m = dcm.RescaleSlope
    x = m * dcm.pixel_array + b
    # Windowing: C.11.2.1.2.1 Default LINEAR Function
    y = np.zeros_like(x, dtype=float)
    below = x <= (c - 0.5 - (w - 1) / 2)
    above = x > (c - 0.5 + (w - 1) / 2)
    between = ~below & ~above
    y[below] = ymin
    y[above] = ymax
    y[between] = ((x[between] - (c - 0.5)) / (w - 1) + 0.5) * (ymax - ymin) + ymin
    return y
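For example, a typical usage might look like this (the window values are just illustrative; w=80, c=40 is a commonly cited brain window):

import pydicom

dcm = pydicom.dcmread("ct_slice.dcm")  # hypothetical file name
img = window_ct(dcm, w=80, c=40, ymin=0, ymax=255)  # map to an 8-bit range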

Related

Calculate the sum of a series in the limit as x tends to 1

I want to calculate the sum of the series below:
lim_{x->1} (2/3 - x/3 - (x^2)/3 + 2*(x^3)/3 - ...)
I am not sure whether we have a formula for finding the sum of this kind of series. Tried a lot but couldn't find any. Any help is appreciated.
This seems to be more maths than computing.
It factorises as (1 + x^3 + x^6 + ...)(2 - x - x^2)/3
If x = 1-d (where d is small), then to first order in d, the (2 - x - x^2) term becomes (2 - (1-d) - (1-2d)) = 3d
And the (1 + x^3 + x^6 + ...) term is a geometric progression, with sum 1/(1-x^3), or here 1/(1-(1-d)^3), and the denominator to first order in d is (1 - (1-3d)) = 3d
Hence the whole thing is (1/(3d)) * (3d) / 3 = 1/3
But we can also verify computationally with a value close to 1 (Python code here):
x = 0.999999
s = 0
f = (2 - x - x*x) / 3.
x3 = x ** 3
s_prev = None
while s != s_prev:
    s_prev = s
    s += f
    f *= x3
print(s)
gives:
0.33333355556918565

Deform a triangle along vector to get a specific angle

I am trying to create a binary tree from a lot of segments in 3d space sharing the same origin.
When merging two segments I want to have a specific angle between the lines to the child nodes.
The following image illustrates my problem. C shows the position of the parent node and A and B the child positions. N is the average vector of the vectors from C to A and C to B.
With a given angle, how can I determine point P?
Thanks for any help
P = C + t * ((A + B)/2 - C), where t is an unknown parameter
PA = A - P (the PA vector)
PB = B - P (the PB vector)
Tan(Fi) = (PA x PB) / (PA * PB) (cross product in the numerator, scalar product in the denominator)
Tan(Fi) * (PA.x*PB.x + PA.y*PB.y) = (PA.x*PB.y - PA.y*PB.x)
This is a quadratic equation in t; after solving it we get two possible positions of the point P (for non-degenerate cases; the second one lies on the other side of the AB line).
Addition:
Let ax = A.x (the X-coordinate of point A) and so on, and define:
abcx = (ax+bx)/2 - cx, abcy = (ay+by)/2 - cy
pax = ax - cx - t*abcx, pay = ay - cy - t*abcy
pbx = bx - cx - t*abcx, pby = by - cy - t*abcy
ff = Tan(Fi), then
ff*(pax*pbx + pay*pby) - pax*pby + pay*pbx = 0
ff*((ax-cx - t*abcx)*(bx-cx - t*abcx) + (ay-cy - t*abcy)*(by-cy - t*abcy))
- (ax-cx - t*abcx)*(by-cy - t*abcy) + (ay-cy - t*abcy)*(bx-cx - t*abcx)
= t^2 * ff*(abcx^2 + abcy^2)
+ t * (-2*ff*(abcx^2 + abcy^2) + abcx*(by-ay) + abcy*(ax-bx))
+ ff*((ax-cx)*(bx-cx) + (ay-cy)*(by-cy)) - (ax-cx)*(by-cy) + (bx-cx)*(ay-cy) = 0
This is quadratic equation AA*t^2 + BB*t + CC = 0 with coefficients
AA = ff*(abcx^2+abcy^2)
BB = -2*ff*(abcx^2+abcy^2) + abcx*(by-ay) + abcy*(ax-bx)
CC = ff*((ax-cx)*(bx-cx)+(ay-cy)*(by-cy)) - (ax-cx)*(by-cy)+(bx-cx)*(ay-cy)
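A quick Python sketch of that solution (the names follow the derivation above; degenerate cases such as AA = 0 or a negative discriminant are only crudely handled):

import math

def solve_p(ax, ay, bx, by, cx, cy, fi):
    # Find P = C + t*((A+B)/2 - C) such that angle APB = fi (radians)
    ff = math.tan(fi)
    abcx = (ax + bx) / 2 - cx
    abcy = (ay + by) / 2 - cy
    AA = ff * (abcx**2 + abcy**2)
    BB = -2 * ff * (abcx**2 + abcy**2) + abcx * (by - ay) + abcy * (ax - bx)
    CC = (ff * ((ax - cx) * (bx - cx) + (ay - cy) * (by - cy))
          - (ax - cx) * (by - cy) + (bx - cx) * (ay - cy))
    disc = BB * BB - 4 * AA * CC
    if AA == 0 or disc < 0:
        return []  # degenerate configuration or no solution
    ts = [(-BB - math.sqrt(disc)) / (2 * AA), (-BB + math.sqrt(disc)) / (2 * AA)]
    return [(cx + t * abcx, cy + t * abcy) for t in ts]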
P.S. My answer is for 2d-case!
For 3d: It is probably simpler to use scalar product only (with vector lengths)
Cos(Fi) = (PA * PB) / (|PA| * |PB|)
Another solution could be a binary search on the vector N: if P is close to C the angle will be smaller, and if P is far from C the angle will be bigger, which makes the problem suitable for a binary search.
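A sketch of that binary search in Python (it assumes the target angle lies between the angle at C and 180 degrees, so the angle grows monotonically as t goes from C toward the midpoint of AB):

import math

def angle_at(px, py, ax, ay, bx, by):
    # Angle APB via the scalar product, as suggested for the 3D case too
    dot = (ax - px) * (bx - px) + (ay - py) * (by - py)
    la = math.hypot(ax - px, ay - py)
    lb = math.hypot(bx - px, by - py)
    return math.acos(max(-1.0, min(1.0, dot / (la * lb))))

def find_p(ax, ay, bx, by, cx, cy, target, iters=60):
    nx, ny = (ax + bx) / 2 - cx, (ay + by) / 2 - cy  # the vector N
    lo, hi = 0.0, 1.0  # t in [0, 1]: from C up to the midpoint of AB
    for _ in range(iters):
        t = (lo + hi) / 2
        if angle_at(cx + t * nx, cy + t * ny, ax, ay, bx, by) < target:
            lo = t  # angle too small: move P farther from C
        else:
            hi = t
    t = (lo + hi) / 2
    return cx + t * nx, cy + t * ny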

Drawing rectangle between two points with arbitrary width

I'm trying to draw a line between two (2D) points when the user swipes their finger across a touch screen. To do this, I plan on drawing a rectangle on every touch update between the X and Y of the previous touch update and the X and Y of the latest touch update. This should create a continuous and solid line as the user swipes their finger across the screen. However, I would also like this line to have an arbitrary width. My question is, how should I go about calculating the coordinates I need for each rectangle (x1, y1, x2, y2)?
--
Also: if anyone has any information on how I could then go about applying anti-aliasing to this line it'd be a massive help.
Calculate a vector between the start and end points:
V.X := Point2.X - Point1.X;
V.Y := Point2.Y - Point1.Y;
Then calculate a perpendicular to it (just swap the X and Y coordinates):
P.X := V.Y; //Use a separate variable, otherwise you overwrite the X coordinate here
P.Y := -V.X; //Flip the sign of either the X or Y (edit by adam.wulf)
Normalize that perpendicular:
Length := sqrt(P.X * P.X + P.Y * P.Y); //That's the length of the perpendicular
N.X := P.X / Length;
N.Y := P.Y / Length; //Now N is the normalized perpendicular
Calculate the 4 points that form the rectangle by adding the normalized perpendicular multiplied by half of the desired width:
R1.X := Point1.X + N.X * Width / 2;
R1.Y := Point1.Y + N.Y * Width / 2;
R2.X := Point1.X - N.X * Width / 2;
R2.Y := Point1.Y - N.Y * Width / 2;
R3.X := Point2.X + N.X * Width / 2;
R3.Y := Point2.Y + N.Y * Width / 2;
R4.X := Point2.X - N.X * Width / 2;
R4.Y := Point2.Y - N.Y * Width / 2;
Draw the rectangle using these 4 points.
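Here is the same recipe as a small Python sketch (one assumption: the two points must be distinct, otherwise the normalization divides by zero):

import math

def line_rect(p1, p2, width):
    # Corners of a width-wide rectangle from p1 to p2, in draw order
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]  # vector between the points
    px, py = vy, -vx                       # perpendicular (swap, flip one sign)
    length = math.hypot(px, py)
    nx, ny = px / length, py / length      # normalized perpendicular
    hw = width / 2.0
    return [(p1[0] + nx * hw, p1[1] + ny * hw),   # R1
            (p1[0] - nx * hw, p1[1] - ny * hw),   # R2
            (p2[0] - nx * hw, p2[1] - ny * hw),   # R4
            (p2[0] + nx * hw, p2[1] + ny * hw)]   # R3

Note the R4/R3 order at the end: returning the corners that way traces the rectangle's outline instead of a bow-tie.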
EDIT: To answer the comments: if X and Y are the same then the line is exactly diagonal, and the perpendicular to a diagonal is a diagonal. Normalization is a method of making a vector's length equal to 1, so that the width of your line in this example will not depend on the perpendicular's length (which is equal to the line's length here).
Easy way (I'll call the "width" the thickness of the line):
We need to calculate 2 values: the shift on the x axis and the shift on the y axis for each of the 4 corners. That is easy enough.
The dimensions of the line are:
width = x2 - x1
height = y2 - y1
Now the x and y shifts (let's call them xS and yS):
xS = (thickness * height / length of line) / 2
yS = (thickness * width / length of line) / 2
To find the length of the line, use Pythagoras's theorem:
length = square_root(width * width + height * height)
Now you have the x shift and y shift.
First coordinate is: (x1 - xS, y1 + yS)
Second: (x1 + xS, y1 - yS)
Third: (x2 + xS, y2 - yS)
Fourth: (x2 - xS, y2 + yS)
And there you go! (Those coordinates are drawn counterclockwise, the default for OpenGL)
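The same idea as a short Python sketch (again assuming the two points are distinct):

import math

def rect_corners(x1, y1, x2, y2, thickness):
    w, h = x2 - x1, y2 - y1                # the line's "dimensions"
    length = math.sqrt(w * w + h * h)      # Pythagoras
    xS = (thickness * h / length) / 2      # x shift
    yS = (thickness * w / length) / 2      # y shift
    return [(x1 - xS, y1 + yS), (x1 + xS, y1 - yS),
            (x2 + xS, y2 - yS), (x2 - xS, y2 + yS)]  # counterclockwise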
If I understand you correctly, you have two end points, say A(x1,y1) and B(x2,y2), and an arbitrary width for the rectangle, say w. I assume the end points sit at the middle of the rectangle's shorter sides, meaning the final rectangle's corners lie at distance w/2 from A and B.
You can compute the slope of the line by:
s1 = (y2 - y1) / (x2 - x1) // assuming x1 != x2
The slope of the shorter sides is nothing but s2 = -1/s1.
We have slope, we have distance and we have the reference points.
We can then derive two equations for each corner point:
For one corner closer to A
C(x3,y3):
(y3 - y1) / (x3 - x1) = s2 // by slope
(y3 - y1)^2 + (x3 - x1)^2 = (w/2)^2 // by distance
replacing (y3 - y1) by a and (x3 - x1) by b yields
a = b * s2 // slope equation
// replace a by b*s2
b^2 * s2^2 + b^2 = (w/2)^2 // distance equation
b^2 = (w/2)^2 / (s2^2+1)
b = sqrt((w/2)^2 / (s2^2+1))
We know w and s2, and hence can compute b.
If b is known, we can deduce x3
x3 = b + x1
and a, as well
a = b * s2
and so y3
y3 = b*s2 + y1
we have one corner point C(x3,y3).
To compute the other corner point closer to A, say D(x4,y4), the slope equation can be constructed as
(y1 - y4) / (x1 - x4) = s2
and the calculations listed above should be applied.
Other two corners can be calculated with the steps listed here replacing A(x1, y1) with B(x2,y2).
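A sketch of this slope-based construction in Python (note the hedges in the answer itself: x1 != x2 is assumed, and s2 is undefined for horizontal lines, so both degenerate orientations need special-casing):

import math

def corner_pair(x, y, s2, w):
    # Two corners at distance w/2 from (x, y) along slope s2
    b = math.sqrt((w / 2) ** 2 / (s2 ** 2 + 1))
    return (x + b, y + b * s2), (x - b, y - b * s2)

def rect_from_slope(x1, y1, x2, y2, w):
    s1 = (y2 - y1) / (x2 - x1)  # line slope; assumes x1 != x2
    s2 = -1 / s1                # perpendicular slope; assumes s1 != 0
    c, d = corner_pair(x1, y1, s2, w)  # corners near A
    e, f = corner_pair(x2, y2, s2, w)  # corners near B
    return c, d, f, e  # ordered so the outline doesn't self-intersect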

Computing the 3D coordinates on a unit sphere from a 2D point

I have a square bitmap of a circle and I want to compute the normals of all the pixels in that circle as if it were a sphere of radius 1.
The sphere/circle is centered in the bitmap.
What is the equation for this?
Don't know much about how people program 3D stuff, so I'll just give the pure math and hope it's useful.
Sphere of radius 1, centered on origin, is the set of points satisfying:
x^2 + y^2 + z^2 = 1
We want the 3D coordinates of a point on the sphere where x and y are known. So, just solve for z:
z = ±sqrt(1 - x^2 - y^2).
Now, let us consider a unit vector pointing outward from the sphere. It's a unit sphere, so we can just use the vector from the origin to (x, y, z), which is, of course, <x, y, z>.
Now we want the equation of a plane tangent to the sphere at (x, y, z), but this will be using its own x, y, and z variables, so instead I'll make it tangent to the sphere at (x0, y0, z0). This is simply:
x0*x + y0*y + z0*z = 1
Hope this helps.
(OP):
you mean something like:
const int R = 31, SZ = power_of_two(R*2);
std::vector<vec4_t> p;
for(int y=0; y<SZ; y++) {
    for(int x=0; x<SZ; x++) {
        const float rx = (float)(x-R)/R, ry = (float)(y-R)/R;
        if(rx*rx + ry*ry > 1) { // outside sphere
            p.push_back(vec4_t(0,0,0,0));
        } else {
            vec3_t normal(rx, sqrt(1.-rx*rx-ry*ry), ry); // out-of-plane component stored in y
            p.push_back(vec4_t(normal,1));
        }
    }
}
It does produce a nice sphere-like shading if I treat the normals as colours and blit it; is it right?
(TZ)
Sorry, I'm not familiar with those aspects of C++. Haven't used the language very much, nor recently.
This formula is often used for a "fake envmapping" effect.
double x = 2.0 * pixel_x / bitmap_size - 1.0;
double y = 2.0 * pixel_y / bitmap_size - 1.0;
double r2 = x*x + y*y;
if (r2 < 1)
{
    // Inside the circle
    double z = sqrt(1 - r2);
    // ... here the normal is (x, y, z) ...
}
Obviously you're limited to assuming all the points are on one half of the sphere or similar, because of the missing dimension. Past that, it's pretty simple.
The middle of the circle has a normal facing precisely in or out, perpendicular to the plane the circle is drawn on.
Each point on the edge of the circle is facing away from the middle, and thus you can calculate the normal for that.
For any point between the middle and the edge, you use the distance from the middle, and some simple trig (which eludes me at the moment). A lerp is roughly accurate at some points, but not quite what you need, since it's a curve. Simple curve though, and you know the beginning and end values, so figuring them out should only take a simple equation.
I think I get what you're trying to do: generate a grid of depth data for an image. Sort of like ray-tracing a sphere.
In that case, you want a Ray-Sphere Intersection test:
http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtinter1.htm
Your rays will be simple perpendicular rays, based off your U/V coordinates (times two, since your sphere has a diameter of 2). This will give you the front-facing points on the sphere.
From there, calculate normals as below (point - origin, the radius is already 1 unit).
Ripped off from the link above:
You have to combine two equations:
Ray: R(t) = R0 + t * Rd, t > 0, with R0 = [X0, Y0, Z0] and Rd = [Xd, Yd, Zd]
Sphere: S = the set of points [xs, ys, zs], where (xs - xc)^2 + (ys - yc)^2 + (zs - zc)^2 = Sr^2
To do this, calculate your ray (x * pixel / width, y * pixel / width, z: 1), then:
A = Xd^2 + Yd^2 + Zd^2
B = 2 * (Xd * (X0 - Xc) + Yd * (Y0 - Yc) + Zd * (Z0 - Zc))
C = (X0 - Xc)^2 + (Y0 - Yc)^2 + (Z0 - Zc)^2 - Sr^2
Plug into quadratic equation:
t0, t1 = (-B ± (B^2 - 4*C)^(1/2)) / 2
Check the discriminant (B^2 - 4*C); if there is a real root, the intersection is:
Ri = [xi, yi, zi] = [x0 + xd * ti , y0 + yd * ti, z0 + zd * ti]
And the surface normal is:
SN = [(xi - xc)/Sr, (yi - yc)/Sr, (zi - zc)/Sr]
Boiling it all down:
So, since we're talking unit values, and rays that point straight at Z (no x or y component), we can boil down these equations greatly:
Ray:
X0 = 2 * pixelX / width
Y0 = 2 * pixelY / height
Z0 = 0
Xd = 0
Yd = 0
Zd = 1
Sphere:
Xc = 1
Yc = 1
Zc = 1
Factors:
A = 1 (unit ray)
B = 2 * (0 + 0 + (0 - 1)) = -2 (no x/y component)
C = (X0 - 1)^2 + (Y0 - 1)^2 + (0 - 1)^2 - 1 = (X0 - 1)^2 + (Y0 - 1)^2
Discriminant = (-2)^2 - 4 * 1 * C = 4 - 4 * C
From here:
If discriminant < 0:
Z = ?, Normal = ? (the ray misses the sphere)
Else:
t = (2 - discriminant^(1/2)) / 2 (the smaller root is the front-facing intersection; with this choice t always lies in [0, 1])
Then:
Z: t
Nx: Xi - 1
Ny: Yi - 1
Nz: t - 1
Boiled farther still:
Intuitively it looks like C (X^2 + Y^2) and the square-root are the most prominent figures here. If I had a better recollection of my math (in particular, transformations on exponents of sums), then I'd bet I could derive this down to what Tom Zych gave you. Since I can't, I'll just leave it as above.
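For completeness, a small Python sketch of the boiled-down version over a whole bitmap (pixels whose rays miss the sphere get a zero normal here):

import math

def sphere_normals(size):
    # One (nx, ny, nz) per pixel for a unit sphere filling a size x size bitmap
    normals = []
    for py in range(size):
        for px in range(size):
            x = 2.0 * px / size - 1.0  # X0 - Xc, maps the pixel into [-1, 1]
            y = 2.0 * py / size - 1.0
            c = x * x + y * y          # the C factor above (A = 1, B = -2)
            disc = 4 - 4 * c
            if disc < 0:
                normals.append((0.0, 0.0, 0.0))  # ray misses the sphere
            else:
                t = (2 - math.sqrt(disc)) / 2    # front-facing hit, t in [0, 1]
                normals.append((x, y, t - 1))    # already unit length
    return normals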

Trilateration with limits?

I need some help solving an issue that came up in one of my small robot experiments. The basic idea is that each little robot can approximate the distance from itself to an object; however, the approximation I'm getting is way too rough, and I'm hoping to calculate something more accurate.
So:
Input: a list of vertices (v_1, v_2, ... v_n) (the robots), and an unknown vertex v_* (the object)
Output: the coordinates of the unknown vertex v_*
The coordinates of each vertex v_1 to v_n are well known (supplied by calling getX() and getY() on the vertex), and it is possible to get the approximate range to v_* by calling getApproximateDistance(v_*). The function getApproximateDistance() returns two values, minDistance and maxDistance, and the actual distance lies between these.
So what I've been trying to do to obtain the coordinates of v_* is to use trilateration; however, I can't seem to find a formula for doing trilateration with limits (lower and upper bounds), so that's really what I'm looking for (I'm not really good enough at math to figure it out myself).
Note: is triangulation the way to go instead?
Note: I would also love to know a way to make performance/accuracy trade-offs.
An example of the data:

Vertex   getX()   getY()   minDistance   maxDistance
v_1      2        2        0.5           1
v_2      1        2        0.3           1
v_3      1.5      1        0.3           0.5

Picture showing the data: http://img52.imageshack.us/img52/6414/unavngivetcb.png
It's obvious that the approximation for v_1 can be better than [0.5; 1], as the figure that the above data creates is a small cut of an annulus (limited by v_3). However, how would I calculate that, and possibly find the approximation within that figure (which is possibly concave)?
Would this be better suited for MathOverflow?
I would go for a simple discrete approach. The implicit formula for an annulus is trivial, and the intersection of multiple annuli, even when there are many of them, can be computed somewhat efficiently with a scanline-based approach.
For high accuracy with fast computation, an option could be a multiresolution approach (i.e. first starting in low-res and then recomputing in high-res only the samples that are close to a valid point).
A small Python toy I wrote can generate a 400x400 pixel image of the intersection area in about 0.5 secs (this is the kind of computation that would get a 100x speedup if done in C).
# Python 2 code; per reading: x, y, r0, r1
data = [(2.0, 2.0, 0.5, 1.0),
        (1.0, 2.0, 0.3, 1.0),
        (1.5, 1.0, 0.3, 0.5)]

# Bounding box of the intersection zone
x0 = max(x - r1 for x, y, r0, r1 in data)
y0 = max(y - r1 for x, y, r0, r1 in data)
x1 = min(x + r1 for x, y, r0, r1 in data)
y1 = min(y + r1 for x, y, r0, r1 in data)

def hit(x, y):
    # True if (x, y) lies inside every annulus
    for cx, cy, r0, r1 in data:
        if not (r0**2 <= ((x - cx)**2 + (y - cy)**2) <= r1**2):
            return False
    return True

res = 400
step = 16
white = chr(255)
grey = chr(192)
black = chr(0)
img = [black] * (res * res)

# Low-res pass: mark the cells near valid points
cells = {}
for i in xrange(0, res, step):
    y = y0 + i * (y1 - y0) / res
    for j in xrange(0, res, step):
        x = x0 + j * (x1 - x0) / res
        if hit(x, y):
            for h in xrange(-step*2, step*3, step):
                for v in xrange(-step*2, step*3, step):
                    cells[(i+v, j+h)] = True

# High-res pass: recompute only the marked cells
for i in xrange(0, res, step):
    for j in xrange(0, res, step):
        if cells.get((i, j), False):
            img[i * res + j] = grey
            img[(i + step - 1) * res + j] = grey
            img[(i + step - 1) * res + (j + step - 1)] = grey
            img[i * res + (j + step - 1)] = grey
            for v in xrange(step):
                y = y0 + (i + v) * (y1 - y0) / res
                for h in xrange(step):
                    x = x0 + (j + h) * (x1 - x0) / res
                    if hit(x, y):
                        img[(i + v)*res + (j + h)] = white

open("result.pgm", "wb").write(("P5\n%i %i 255\n" % (res, res)) +
                               "".join(img))
Another interesting option could be using a GPU, if available. Starting from a white picture and drawing the exterior of each annulus in black will leave the intersection area in white at the end.
For example with Python/Qt the code for doing this computation is simply:
# QImage, QPainter, etc. come from PyQt's QtGui module;
# res, data and x0/y0/x1/y1 are as in the previous script
img = QImage(res, res, QImage.Format_RGB32)
dc = QPainter(img)
dc.fillRect(0, 0, res, res, QBrush(QColor(255, 255, 255)))
dc.setPen(Qt.NoPen)
dc.setBrush(QBrush(QColor(0, 0, 0)))
for x, y, r0, r1 in data:
    xa1 = (x - r1 - x0) * res / (x1 - x0)
    xb1 = (x + r1 - x0) * res / (x1 - x0)
    ya1 = (y - r1 - y0) * res / (y1 - y0)
    yb1 = (y + r1 - y0) * res / (y1 - y0)
    xa0 = (x - r0 - x0) * res / (x1 - x0)
    xb0 = (x + r0 - x0) * res / (x1 - x0)
    ya0 = (y - r0 - y0) * res / (y1 - y0)
    yb0 = (y + r0 - y0) * res / (y1 - y0)
    p = QPainterPath()
    p.addEllipse(QRectF(xa0, ya0, xb0-xa0, yb0-ya0))  # inner circle
    p.addEllipse(QRectF(xa1, ya1, xb1-xa1, yb1-ya1))  # outer circle
    p.addRect(QRectF(0, 0, res, res))                 # whole image
    dc.drawPath(p)  # odd-even fill paints everything except the annulus
and the computation part for an 800x800 resolution image takes about 8ms (and I'm not sure it's hardware accelerated).
If only the barycenter of the intersection is to be computed then there is no memory allocation at all. For example a "brute-force" approach is just a few lines of C
typedef struct TReading {
    double x, y, r0, r1;
} Reading;

int hit(double xx, double yy,
        Reading *readings, int num_readings)
{
    while (num_readings--)
    {
        double dx = xx - readings->x;
        double dy = yy - readings->y;
        double d2 = dx*dx + dy*dy;
        if (d2 < readings->r0 * readings->r0) return 0;
        if (d2 > readings->r1 * readings->r1) return 0;
        readings++;
    }
    return 1;
}

int computeLocation(Reading *readings, int num_readings,
                    int resolution,
                    double *result_x, double *result_y)
{
    // Compute bounding box of interesting zone
    double x0 = -1E20, y0 = -1E20, x1 = 1E20, y1 = 1E20;
    for (int i=0; i<num_readings; i++)
    {
        if (readings[i].x - readings[i].r1 > x0)
            x0 = readings[i].x - readings[i].r1;
        if (readings[i].y - readings[i].r1 > y0)
            y0 = readings[i].y - readings[i].r1;
        if (readings[i].x + readings[i].r1 < x1)
            x1 = readings[i].x + readings[i].r1;
        if (readings[i].y + readings[i].r1 < y1)
            y1 = readings[i].y + readings[i].r1;
    }

    // Scan processing
    double ax = 0, ay = 0;
    int total = 0;
    for (int i=0; i<=resolution; i++)
    {
        double yy = y0 + i * (y1 - y0) / resolution;
        for (int j=0; j<=resolution; j++)
        {
            double xx = x0 + j * (x1 - x0) / resolution;
            if (hit(xx, yy, readings, num_readings))
            {
                ax += xx; ay += yy; total += 1;
            }
        }
    }
    if (total)
    {
        *result_x = ax / total;
        *result_y = ay / total;
    }
    return total;
}
And on my PC it can compute the barycenter with resolution = 100 in 0.08 ms (x=1.50000, y=1.383250) or with resolution = 400 in 1.3 ms (x=1.500000, y=1.383308). Of course a two-step (low-res/high-res) speedup could be implemented even for the barycenter-only version.
I would switch from "max/min" to trying to minimize an error function. That gets you to the problem discussed at Finding a point that best fits the intersection of n spheres, which is more tractable than intersecting a series of complicated shapes. (And what if one robot's sensor is messed up and it gives an impossible value? That variation will still usually give a reasonable answer.)
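In that spirit, here is a minimal least-squares sketch (assuming SciPy is available; it treats the midpoint of [minDistance, maxDistance] as the measured range and weights each reading by the width of its interval):

import numpy as np
from scipy.optimize import least_squares

# (x, y, minDistance, maxDistance) per robot, using the example data above
readings = [(2.0, 2.0, 0.5, 1.0), (1.0, 2.0, 0.3, 1.0), (1.5, 1.0, 0.3, 0.5)]

def residuals(p):
    px, py = p
    out = []
    for x, y, r0, r1 in readings:
        d = np.hypot(px - x, py - y)
        mid, half = (r0 + r1) / 2, (r1 - r0) / 2
        out.append((d - mid) / half)  # tighter bounds get a larger weight
    return out

fit = least_squares(residuals, x0=[1.5, 1.5])  # start anywhere near the scene
print(fit.x)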
Not sure about your case, but in a typical robotics application you're going to be reading sensors periodically and crunching the data. If that's the case, you're trying to estimate the location based on noisy data and that's a common problem. As a simple (less rigorous) method, you could take the existing position and adjust it toward or away from each known point. Take the measured distance to target minus the present distance to target, multiply that delta (error) by some value between 0 and 1, and move your estimated position that much toward the target. Repeat for each target. Then repeat each time you get a new set of measurements. The multiplier will have an effect like a low-pass filter, smaller values will give you a more stable position estimate with slower response to movement. For the distance, use the average of the min and max. If you can put tighter bounds on the range to one target, you can increase the multiplier closer to 1 for just that target.
This is of course a crude position estimator. The math folks can probably give you something more rigorous, but also more complicated. And the solution definitely doesn't have to involve intersecting areas or working with geometric shapes.
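A sketch of that incremental estimator in Python (the gain g plays the low-pass-filter role described above; the names and values are illustrative):

import math

def update_estimate(est, readings, g=0.25):
    # One step: nudge the estimate toward/away from each known point
    ex, ey = est
    for x, y, r_min, r_max in readings:
        measured = (r_min + r_max) / 2        # average of min and max
        dx, dy = ex - x, ey - y
        current = math.hypot(dx, dy) or 1e-9  # present distance to the point
        error = measured - current            # >0: move away, <0: move closer
        ex += g * error * dx / current
        ey += g * error * dy / current
    return ex, ey

est = (1.5, 1.5)  # arbitrary starting estimate
for _ in range(50):  # in practice: once per new set of measurements
    est = update_estimate(est, [(2.0, 2.0, 0.5, 1.0),
                                (1.0, 2.0, 0.3, 1.0),
                                (1.5, 1.0, 0.3, 0.5)])
print(est)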
