get uvw coordinates from pixel coordinates for ray-tracing - math

I'm trying to implement a simple ray-tracing algorithm,
so the first step is to convert pixel coordinates into the uvw coordinate system.
I am using these two equations that I found in a book,
where l, r, b, t are the view frustum bounds, (i, j) are the pixel indices, and (nx, ny) are the scene width and height.
Then, to calculate the canonical coordinates, I use
I want to understand the previous equations and why they give uvw coordinates for a perspective projection and not for an orthogonal projection (when I use an orthogonal projection the equations still give the result as if a perspective projection were used).

Let's assume your camera is some sort of a pyramid. It has a bottom face which I'll refer to as the "camera screen", and the height of the pyramid, also known as the focal length, will be marked as F (or in your equations, Ws).
            T(op)
      *-------------*
      |\           /|
      | \         / |
      |  \       /  |
      |   \     /   |
L(eft)|    *E(ye)   |R(ight)
      |   /     \   |
      |  /       \  |
      | /         \ |
      |/           \|
      *-------------*
            B(ottom)
Let's assume j indexes the pixel rows from bottom to top (j = 0, 1, ..., Ny-1) and i indexes the pixel columns from left to right (i = 0, 1, ..., Nx-1).
As you go from bottom to top in the image, on the screen you move from the B value to the T value. At the fraction d (between 0 = bottom and 1 = top) of your way from bottom to top, your height is
Vs = B + (T-B) * d
A bit of messing around shows that the fraction d is actually:
d = (j + 0.5) / Ny
So:
Vs = B + (T-B) * (j + 0.5) / Ny
And similarly:
Us = L + (R-L) * (i + 0.5) / Nx
Now, let's denote U as the vector going from left to right, V from bottom to top, 'W' going from the eye forward. All these vectors are normalized.
Now assume the eye is located directly above the point (0,0) of the screen, i.e. exactly above the center of the rectangular face of the pyramid.
To go from the eye directly to (0,0) you would go:
Ws * W
And then to go from that point to another point on the screen at indexes (i,j) you would go:
Us * U + Vs * V
You can check that Us = 0 and Vs = 0 at the center of the image (i + 0.5 = Nx/2 and j + 0.5 = Ny/2), since L = -R and B = -T, as the eye is directly above the center of the rectangle.
And finally, if we compose it together, a point on the screen at indexes (i,j) is
S = E + Us * U + Vs * V + Ws * W
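For reference, here is a minimal numpy sketch of that mapping (the function and argument names are mine). Note that the Us/Vs formulas are identical for both projection types; what makes the result perspective or orthographic is only how the ray is built from them:

import numpy as np

def pixel_to_ray(i, j, nx, ny, l, r, b, t, E, U, V, W, Ws, perspective=True):
    # Ray through the centre of pixel (i, j), following the formulas above.
    # E is the eye, U/V/W the unit camera basis vectors, Ws the focal length.
    Us = l + (r - l) * (i + 0.5) / nx
    Vs = b + (t - b) * (j + 0.5) / ny
    S = E + Us * U + Vs * V + Ws * W     # point on the screen: S = E + Us*U + Vs*V + Ws*W
    if perspective:
        # perspective: every ray starts at the eye and passes through S
        d = S - E
        return E, d / np.linalg.norm(d)
    # orthographic: the ray starts at the pixel's position on the camera plane
    # (no Ws*W term in the origin) and all rays share the same direction W
    return E + Us * U + Vs * V, W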
Enjoy!

Related

Uniformly distribute n points inside an ellipse

How do you uniformly distribute n points inside an ellipse given n and its minor axis length (major axis can be assumed to be the x axis and size 1)? Or if that is impossible, how do you pick n points such that the smallest distance between any two is maximized?
Right now I am uneasy about running expensive electron-repulsion simulations (in hopes that there is a better solution, like the sunflower function in this question for distributing n points in a circle). n will most likely be between 10 and 100 points, but it would be nice if it worked well for all n.
If the ellipse is centered at (0,0), with a = 1 the major radius, b the minor radius, and the major axis horizontal, then the ellipse has equation x' A x = 1, where A is the matrix
/           \
|  1    0   |
|  0   1/b² |
\           /
Now, here is a way to uniformly sample inside an ellipse with equation x' A x = r². Take the upper-triangular Cholesky factor of A, say U. Here, U is simply
/          \
|  1   0   |
|  0  1/b  |
\          /
Pick a point y at random, uniformly, in the circle centered at (0,0) and with radius r. Then x = U^{-1}y is a point uniformly picked at random in the ellipse.
This method works in arbitrary dimension, not only in the two-dimensional case (replacing "circle" with "hypersphere").
So, for our present case, here is the pseudo-code, assuming random() uniformly generates a number between 0 and 1:
R = sqrt(random())
theta = random() * 2 * pi
x1 = R * cos(theta)
x2 = b * R * sin(theta)
Here is some R code to generate n points:
runif_ellipse <- function(n, b){
  R <- sqrt(runif(n))
  theta <- runif(n) * 2*pi
  cbind(R*cos(theta), b*R*sin(theta))
}
points <- runif_ellipse(n = 1000, b = 0.7)
plot(points, asp = 1, pch = 19)
Rather simple approach:
Make an initial approximation for the distance D, for example D = sqrt(Pi*b/N), where b is the minor axis length.
Generate a triangular grid of points with cell size D (equilateral triangles give the densest packing). Count the number of points lying inside the given ellipse.
If the count is smaller than N, decrease D; if it is larger, increase D. Repeat until exactly N points are inside.
The dependence of CountInside on D is monotone for a fixed grid starting point, so you can use a binary search to get the result faster (see the sketch below).
There might be tricky cases with 2-4 symmetric points near the border that move in or out simultaneously. If you hit this case, shift the grid starting point a bit.
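A rough Python sketch of this grid plus binary-search idea, under the stated assumptions (a = 1 for the major radius; all names are mine):

import numpy as np

def triangular_grid_in_ellipse(n, b, a=1.0, max_iter=60):
    # Binary-search the grid spacing D until exactly n grid points fall inside
    # the ellipse x^2/a^2 + y^2/b^2 <= 1.

    def grid_points(D):
        # equilateral-triangle grid: rows are D*sqrt(3)/2 apart, odd rows shifted by D/2
        pts = []
        row_h = D * np.sqrt(3.0) / 2.0
        y, row = -b, 0
        while y <= b:
            x = -a + (0.0 if row % 2 == 0 else D / 2.0)
            while x <= a:
                pts.append((x, y))
                x += D
            y += row_h
            row += 1
        return np.array(pts)

    def inside(pts):
        return (pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 <= 1.0

    lo, hi = 1e-3, 2.0 * a            # bracket: small D -> many points inside, large D -> few
    for _ in range(max_iter):
        D = 0.5 * (lo + hi)
        pts = grid_points(D)
        count = int(np.count_nonzero(inside(pts)))
        if count == n:
            break
        if count > n:
            lo = D                    # too many points inside: coarsen the grid
        else:
            hi = D                    # too few: refine the grid
    # If the count jumps over n (symmetric points entering/leaving together, as noted
    # above), shift the grid origin slightly and retry.
    return pts[inside(pts)]

pts = triangular_grid_in_ellipse(n=40, b=0.5)
print(len(pts))   # ideally 40; if not, perturb the grid origin as suggested above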

Rotating a Vector2 around a point

I'm trying to understand what happens when we rotate a Vector around an arbitrary point. If p.x was 0 then the angle would be 90 and I understand that, but I can't visualize why it is 45 when I use p.x = 50.
var v = new THREE.Vector2(100,0);
var p = new THREE.Vector2(50,0);
v.rotateAround(p, 90 * Math.PI/180);
console.log('Angle: ', v.angle() * 180/Math.PI);
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r82/three.min.js">
</script>
You are rotating the point v around the point p. This is done by rotating the vector v-p around the origin and adding the resulting vector (read point translation) back to p.
As v - p = (50,0), the 90° rotation gives (0,50), and adding back p gives the point (50,50), which is at an angle of 45° relative to the origin but still straight up from p.
  |
  |         o   v after rotation
  |         .
  |         .
  |         .
  |         .
--o---------+---------o-----
origin      p         v at start
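For comparison, a plain-Python sketch of the same translate, rotate, translate-back steps (names are mine); it reproduces the (50, 50) result, i.e. 45° from the origin:

import math

def rotate_around(v, p, angle):
    # Rotate point v around point p by `angle` radians (counter-clockwise):
    # translate so that p is the origin, rotate, then translate back.
    c, s = math.cos(angle), math.sin(angle)
    dx, dy = v[0] - p[0], v[1] - p[1]        # v - p
    return (p[0] + dx * c - dy * s,          # rotate (dx, dy) about the origin,
            p[1] + dx * s + dy * c)          # then add p back

v = rotate_around((100, 0), (50, 0), math.radians(90))
print(v)                                     # approximately (50.0, 50.0)
print(math.degrees(math.atan2(v[1], v[0])))  # approximately 45.0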

Math - Difficult Pong AI

I am writing a number of difficulty levels for a Pong clone I'm building to familiarize myself with SFML and Xcode. For the hardest difficulty, I would like to create an AI level where the computer knows instantly where the ball will go. So, if I had my xVelocity and my yVelocity, I could effectively have the slope. The thing is, every time the ball hits the top or bottom, the yVelocity reverses. So essentially, I have an algebra slope problem which does the opposite every time the walls are hit.
Now, my screen height is 600 pixels and the hit detection is 5 pixels on the top and bottom making the space 590 pixels.
My question: Is there some sort of formula which would encompass all of these factors? Say, for instance, the ball is hit at x = 30 and y = 240 with a slope of 1.45; I want to get the y value at which it will hit when x = 770.
Let me know if I can simplify this. Again, I know I could figure it out by calculating it, say, 4 times if the ball bounces 4 times, but I was wondering if there is a way to figure it out that takes the y-velocity switch at the boundaries into account.
Thanks!
Edit: Just read that your playable area is actually 590 pixels high; this changes the numbers but not the formulas.
Calculate where the ball would hit ignoring the collisions. If (0,0) is the top left of your arena, take y = mx + b, where b is your y offset (ball was hit at y = 240) and m is your slope (1.45)
Now we want to know what y will equal when x is 770 - 30 = 740 units further on, so do the math:
y = (1.45)(740) + (240) = 1313
This is obviously outside of your range. It will have reflected
floor(y / height) = floor(1313 / 590) = 2 times
Since that is an even number of reflections, the ball is still moving in its original vertical direction, and it will hit at
y mod height = 1313 mod 590 = 133
If it had reflected an odd number of times (floor(y / height) % 2 == 1), you would instead use
height - (y mod height); for example, an unfolded y of 1903 (three reflections) gives 590 - (1903 mod 590) = 590 - 133 = 457
You can visualize this by stacking multiple 590 height fields on top of each other, with one being where you started:
------------------------------------------------------------
|
|
|
|                                ball ends up here (*)
|                                                *
|                                             *
|                                          *
------------------(reflection two)------*-------------------
|                                    *
|                                 *
|                              *
|                           *
|                        *
|                     *
|                  *
----------------*---------(reflection 1)--------------------
|            *
|         *
|      *
|   *
|* ball hit here
|
|
------------------------------------------------------------
The same ideas should apply for going downward. Calculate position, figure out number of reflections, use mod or 590 - mod to determine where it should be.
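A small Python version of this unfold-and-fold calculation (the function name is mine), reproducing the worked example above:

def predict_y(x_hit, y_hit, slope, x_target, height=590):
    # x_hit, y_hit : where the ball was hit; slope = yVelocity / xVelocity
    # x_target     : x of the opposite paddle; height : playable height (590 here)
    y = slope * (x_target - x_hit) + y_hit   # straight line, ignoring the walls
    bounces = int(abs(y) // height)          # how many wall crossings were "unfolded"
    folded = abs(y) % height                 # position inside one 0..height strip
    if bounces % 2 == 0:
        return folded                        # even number of reflections: same orientation
    return height - folded                   # odd number: mirrored

print(predict_y(30, 240, 1.45, 770))         # prints 133.0, matching the worked example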
I haven't tried this, but if you know where on the y-axis it was hit from and the slope, you'd have a slope-intercept formula. Plug in the distance to the opposite side and you can tell whether it will be above, below, or inside the screen. If it will go above or below, calculate what x will be when y hits the top or bottom, subtract that from the total x, and repeat.
If the math is getting to you, here is a programmatic, less elegant
(and slower, not that that matters in most implementations of Pong) solution:
tempX = ball.x;
tempY = ball.y;
tempXVel = ball.hspeed;
tempYVel = ball.vspeed;
while (tempX < x)        // assumes the ball is travelling toward you and you are the right
{                        // paddle, in an "x = 0 is the left-hand side" scenario
    tempX += tempXVel;
    tempY += tempYVel;
    if (tempY > 595 || tempY < 5)   // wall positions: 5 and 595 for a 600px screen with 5px walls
        tempYVel *= -1;             // Y velocity switches at the boundaries
}
target_Y = tempY;
Basically just do the same logic you do on the ball in a loop and then set that as your target.

Find angle between two points, respective to horizontal axis?

I have two points, one is always at the origin (0,0), and the other can be anywhere else in the world. I'd like to find the angle between them, respective to the horizontal axis.
   |     2
   |    /
   |   /
   |  /
   | /
   |/  a
---1--------------- (horizontal axis)
   |

a = angle (~50 degrees, counter clockwise)
In the above I would construct a right triangle and use sohcahtoa to figure out the missing angle I want, but it gets a bit ugly when the second point is in a different quadrant like in this case:
 2     |
  \    |
   \   |
    \  |
     \ |  a
      \|
-------1--------------
       |
       |

a = angle (~135 degrees, counter clockwise)
I just end up with a bunch of different cases depending on what quadrant the second point is in. I'm thinking there must be a much simpler, general solution. This is kind of like trying to find the angle between a point on the edge of a circle and its center, respective to the origin's horizontal axis.
What's a good way to do this?
Most programming languages/APIs provide a function, atan2(), which finds the angle and takes the quadrant into consideration. Just use that.
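For example, in Python (math.atan2 takes y first, then x):

import math

def angle_to(x, y):
    # angle of the point (x, y) relative to the origin's horizontal axis,
    # counter-clockwise; atan2 handles all quadrants (and x == 0)
    return math.degrees(math.atan2(y, x))

print(angle_to(1, 1))    # 45.0
print(angle_to(-1, 1))   # 135.0
print(angle_to(-1, -1))  # -135.0 (add 360 if you want 0..360)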
First we would like to find the equation of the straight line that connects the two points:
Let p = (x0, y0) be the second point.
If x0 = 0 then the answer is 90 degrees (or 270 degrees when y0 < 0).
Otherwise let m = y0 / x0. The line through the origin and p is
y = m x
and arctan(m) gives the angle (up to the quadrant ambiguity that atan2 resolves).
Also note that if (x0, y0) == (0, 0) then the angle is undefined.

formula for best approximation for center of 2D rotation with small angles

This is not homework. I am asking to see whether the problem is classical (trivial) or non-trivial. It looks simple on the surface, and I hope it is truly a simple problem.
I have N points (N >= 2) with coordinates Xn, Yn on the surface of a 2D solid body.
The solid body undergoes some small rotation (below Pi/180) combined with small shifts (below 1% of the distance between any 2 of the N points), and possibly some small deformation too (<< 0.001%).
The same N points have new coordinates named XXn, YYn.
Calculate, with the best approximation, the location of the center of rotation as a point C with coordinates XXX, YYY.
Thank you
If you know correspondence (i.e. you know which points are the same before and after the transformation), and you choose to allow scaling, then the problem is a set of linear equations. If you have 2 or more points then you can find a least-squares solution with little difficulty.
For initial points (xi,yi) and transformed points (xi',yi') you have equations of the form
xi' =  a xi + b yi + c
yi' = -b xi + a yi + d
which you can rearrange into a linear system
A x = y
where
A = |  x1   y1   1   0 |
    |  y1  -x1   0   1 |
    |  x2   y2   1   0 |
    |  y2  -x2   0   1 |
    |        ...       |
    |  xn   yn   1   0 |
    |  yn  -xn   0   1 |

x = | a |
    | b |
    | c |
    | d |

y = | x1' |
    | y1' |
    | x2' |
    | y2' |
    | ... |
    | xn' |
    | yn' |
the standard "least-squares" form of which is
A^T A x = A^T y
and has the solution
x = (A^T A)^-1 A^T y
with A^T as the transpose of A and A^-1 as the inverse of A. Normally you would use an SVD or QR decomposition to compute the solution as they ought to be more stable and less computationally intensive than the inverse.
Once you've found x (and so the four elements of the transformation a, b, c and d) then the various elements of the transformation are given by
scale = sqrt(a*a+b*b)
rotation = atan2(-b, a)   (counter-clockwise, consistent with the centre-of-rotation formulas in the edit below; atan2(b, a) gives the clockwise angle)
translation = (c,d)/scale
If you don't include scaling then the system is non-linear and requires an iterative solution (but isn't too difficult to solve). If you do not know correspondence then the problem is substantially harder: for small transformations something like iterated closest point works, for large transformations it's a lot harder.
Edit: I forgot to include the centre of rotation. A rotation theta about an arbitrary point p is a sequence
translate(p) rotate(theta) translate(-p)
if you expand it all out as an affine transformation (essentially what we have above) then the translation terms come to
dx = px - cos(theta)*px + sin(theta)*py
dy = py - sin(theta)*px - cos(theta)*py
we know theta (rotation), dx (c) and dy (d) from the equations above. With a little bit of fiddling we can solve for px and py
px = 0.5*(dx - sin(theta)*dy/(1-cos(theta)))
py = 0.5*(dy + sin(theta)*dx/(1-cos(theta)))
You'll notice that the equations are undefined if theta is zero, because there is no centre of rotation when no rotation is performed.
I think I have all that correct, but I don't have time to double check it all right now.
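A minimal numpy sketch of this least-squares construction and the centre-of-rotation recovery (function and variable names are mine; the centre formula assumes the fit is essentially a pure rotation plus translation, i.e. scale close to 1):

import numpy as np

def estimate_rotation_center(before, after):
    # before, after: (N, 2) arrays of corresponding points.
    # Returns (theta, scale, centre) estimated by least squares.
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)
    n = len(before)

    # Rows of A implement  xi' = a xi + b yi + c  and  yi' = -b xi + a yi + d
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([before[:, 0],  before[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([before[:, 1], -before[:, 0], np.zeros(n), np.ones(n)])
    y = after.reshape(-1)                      # [x1', y1', x2', y2', ...]

    a, b, c, d = np.linalg.lstsq(A, y, rcond=None)[0]

    scale = np.hypot(a, b)
    theta = np.arctan2(-b, a)                  # counter-clockwise angle of the fitted rotation

    # Centre of rotation from the translation terms (undefined when theta == 0)
    s, co = np.sin(theta), np.cos(theta)
    px = 0.5 * (c - s * d / (1.0 - co))
    py = 0.5 * (d + s * c / (1.0 - co))
    return theta, scale, (px, py)

# quick check: three points rotated by 90 degrees about (1, 0)
before = [(2, 0), (1, 1), (0, 0)]
after = [(1, 1), (0, 0), (1, -1)]
print(estimate_rotation_center(before, after))   # theta ~ pi/2, scale ~ 1, centre ~ (1, 0)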
Look up the "Kabsch Algorithm". It is a general-purpose algorithm for creating rotation matrices using N known pairs. Kabsch created it to assist denoising stereo photographs. You rotate a feature in picture A to picture B, and if it is not in the target position, the feature is noise.
http://en.wikipedia.org/wiki/Kabsch_algorithm
See "On calculating the finite centre of rotation for rigid planar motion" for a relatively simple solution. I say "relatively simple" because it still uses things like pseudo-inverses and SVD (singular value decomposition). And here's a Wikipedia article on Instant centre of rotation. And another paper: "Estimation of the Finite Center of Rotation in Planar Movements".
If you can handle stiffer stuff, try Least Squares Estimation of Transformation Parameters Between Two Point Patterns.
First of all, the problem is non-trivial.
A "simple" solution. It works best when the polygon resembles a circle and the points are distributed evenly.
Iterate through the N points.
For both the old and the new dataset, find the 2 points farthest from point N.
So now you have the triangle before and after the transformation. Use the clockwise direction from the center of each triangle to number its vertices as [0] (= the N-th point in the original dataset), [1], and [2] (the 2 farthest points).
Calculate the center of rotation and the deformation (both x and y) of this triangle. If the deformation is more than your 0.001%, drop the data for this triangle; otherwise save it.
Calculate the average of the saved centers of rotation.
The right solution: define a function Err(Point BEFORE[N], Point AFTER[N], double TFORM[3][3]), where BEFORE is the constant old data points, AFTER the constant new data points, and TFORM[3][3] an affine transformation matrix. Err(...) returns a scalar error value: 0.0 when TFORM maps BEFORE exactly onto AFTER, and some value > 0.0 otherwise. Then use any numerical method you want to find the minimum of Err(TFORM), e.g. gradient search.
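A minimal sketch of this error-minimisation idea in Python, parametrising TFORM as a rigid motion (theta, tx, ty) instead of a full 3x3 affine matrix and letting scipy.optimize search for the minimum (names are mine):

import numpy as np
from scipy.optimize import minimize

def fit_rigid(before, after):
    # Find (theta, tx, ty) minimising the squared error between the transformed
    # BEFORE points and the AFTER points.
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)

    def err(params):
        theta, tx, ty = params
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        moved = before @ R.T + np.array([tx, ty])
        return np.sum((moved - after) ** 2)    # 0.0 for a perfect match

    return minimize(err, x0=[0.0, 0.0, 0.0]).x  # [theta, tx, ty]

The centre of rotation can then be recovered from theta and the translation terms exactly as in the least-squares answer above.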
Calculate the polygon centers O1 and O2. Determine the line through O1 and (X0, Y0) and the line through O2 and (XX0, YY0). Find the intersection of the lines to get C.
If I understand your problem correctly, this could be solved in this way:
find the extremities (furthest points, probably on several axes)
scale either one to match
their rotation should now be trivial (?)
Choose any 2 points on the body, P1 and P2, before and after the rotation. Find the vector between the before and after positions of each point. Cross these vectors with a vector normal to the plane of rotation. This gives two new vectors; the intersection of the lines formed by the points and these two new vectors is the center of the rotation.
findCenterOfRotation(Point P1before, Point P1after, Point P2before, Point P2after)
{
    if (P1after == P1before) return P1after   // this point did not move: it is the center
    if (P2after == P2before) return P2after

    Vector V1 = P1after - P1before
    Vector V2 = P2after - P2before

    // Vn = normal to the plane of rotation. Can be messy to create for an arbitrary
    // 3D orientation, but is simple if you know the orientation; for instance,
    // Vn = (0,0,1) for an object in the x,y plane.
    Vector VL1 = V1 x Vn              // cross product of V1 with Vn
    Vector VL2 = V2 x Vn

    // Center of rotation is the intersection of the two lines
    return intersectLines(P1after, VL1, P2after, VL2)
}

intersectLines(Point P1, Vector V1, Point P2, Vector V2)
{
    // intersect two lines given in point, direction form
    // returns a Point
}
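A plain 2D Python version of this idea (helper names are mine). It runs the perpendicular lines through the chord midpoints, which gives the exact centre for a pure rotation; the pseudocode above uses the after-points instead, which is a close approximation for the small rotations in the question:

def center_from_two_points(p1_before, p1_after, p2_before, p2_after):
    if p1_after == p1_before:
        return p1_after                        # this point did not move: it is the centre
    if p2_after == p2_before:
        return p2_after

    # displacement of each point, and its perpendicular (V x (0,0,1), dropped to 2D)
    v1 = (p1_after[0] - p1_before[0], p1_after[1] - p1_before[1])
    v2 = (p2_after[0] - p2_before[0], p2_after[1] - p2_before[1])
    d1 = (v1[1], -v1[0])
    d2 = (v2[1], -v2[0])

    # run each perpendicular through the chord midpoint (perpendicular bisector):
    # the centre of a pure rotation lies on both bisectors
    m1 = ((p1_before[0] + p1_after[0]) / 2, (p1_before[1] + p1_after[1]) / 2)
    m2 = ((p2_before[0] + p2_after[0]) / 2, (p2_before[1] + p2_after[1]) / 2)
    return intersect_lines(m1, d1, m2, d2)

def intersect_lines(p1, d1, p2, d2):
    # intersect two lines given in point + direction form
    denom = d1[0] * d2[1] - d1[1] * d2[0]      # 2D cross product of the directions
    if denom == 0:
        raise ValueError("lines are parallel")
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

print(center_from_two_points((2, 0), (1, 1), (1, 1), (0, 0)))   # -> (1.0, 0.0)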
