Generating movement based on time t for real-time ocean waves from an initial spectrum - math

I've spent the last week or so rendering a simple ocean using Gerstner waves, but I'm having issues with tiling, so I decided to start rendering it "properly" and dip my toes into the murky waters of rendering a heightfield using an inverse FFT (IFFT).
There are plenty of papers explaining the basic gist:
1) calculate a frequency spectrum
2) use an IFFT to convert that spectrum from the frequency domain to the spatial domain, producing a heightfield animated with time t
Since the beginning of this journey I have learned about things like the complex plane, the complex exponential, and the FFT in more detail, but after the initial step of creating an initial spectrum (rendering a texture full of Gaussian numbers with mean 0 and standard deviation 1, filtered by the Phillips spectrum) I am still totally lost.
My code for creating the initial data is here (GLSL):
uniform float amplitude;   // wave amplitude
uniform float velocity;    // wind speed
uniform vec2  waveDir;     // wind direction
uniform vec2  texSize;     // dimensions of the target texture
out vec3 color;
// (Declarations above are reconstructed from usage; rand() is a user-supplied
// generator returning Gaussian values with mean 0 and standard deviation 1.)

float PhillipsSpectrum(vec2 k){
    // kLen is the length of the vector from the centre of the tex
    float kLen = length(k);
    float kSq  = kLen * kLen;
    // Amp is wave amplitude, passed in as a uniform
    float Amp = amplitude;
    // L = velocity * velocity / gravity
    float L = (velocity * velocity) / 9.81;
    float dir = dot(normalize(waveDir), normalize(k));
    return Amp * (dir * dir) * exp(-1.0 / (kSq * L * L)) / (kSq * kSq);
}

void main(){
    // get screen pos - centre is 0.0 and ranges from -0.5 to 0.5 in both directions
    vec2 screenPos = gl_FragCoord.xy / texSize - vec2(0.5, 0.5);
    // get two random Gaussian numbers
    vec2 randomGauss = vec2(rand(screenPos), rand(screenPos.yx));
    // use the Phillips spectrum as a filter depending on position in the freq domain
    float Phil = sqrt(PhillipsSpectrum(screenPos));
    float coeff = 1.0 / sqrt(2.0);
    color = vec3(coeff * randomGauss.x * Phil, coeff * randomGauss.y * Phil, 0.0);
}
which creates a texture like this (screenshot of the resulting spectrum texture):
Now I am totally lost as to how to:
a) derive spectra in three directions from the initial texture
b) animate this according to time t, as mentioned in this paper (https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/OceanCS_Slides.pdf) on slide 5
I might be completely stupid and overlooking something really obvious - I've looked at a bunch of papers and just get lost in formulae even after acquainting myself with their meaning. Please help.
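For reference, the scheme in the NVIDIA slides (following Tessendorf's paper) never re-derives separate spectra from scratch; it evolves each complex texel of the initial spectrum in time using the deep-water dispersion relation ω(k) = √(g·|k|), pairing each sample with the conjugate of the sample at -k so that the heightfield comes out real after the IFFT:
h(k,t) = h0(k)·e^(iωt) + conj(h0(-k))·e^(-iωt)
A minimal CPU-side sketch in C (the helper name and its arguments are mine, not from the post):
#include <complex.h>
#include <math.h>

/* Evolve one frequency-domain sample to time t.
   h0      : initial amplitude at wave vector  k (one texel of the texture)
   h0minus : initial amplitude at wave vector -k (the mirrored texel)
   kLen    : |k|, the magnitude of the wave vector */
double complex h_tilde(double complex h0, double complex h0minus,
                       double kLen, double t)
{
    double omega = sqrt(9.81 * kLen);            /* deep-water dispersion */
    return h0 * cexp(I * omega * t)              /* forward-moving wave */
         + conj(h0minus) * cexp(-I * omega * t); /* conjugate pair keeps the
                                                    IFFT output real */
}
Running every texel through this each frame and then through a 2D IFFT gives the animated heightfield; the choppy-displacement and slope spectra (the other "directions") are obtained from the same h(k,t) by multiplying with -i·k/|k| and i·k respectively.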

Related

3D Projection Modification - Encode Z/W into Z

This is a little tricky to explain, so bear with me. I'm attempting to design a 2D projection matrix that takes 2D pixel coordinates along with a custom world-space depth value, and converts them to clip space.
The idea is that it would allow drawing elements based on screen coordinates, but at specific depths, so that these elements would interact on the depth buffer with normal 3D elements. However, I want x and y coordinates to remain the same scale at every depth. I only want depth to influence the depth buffer, and not coordinates or scale.
After the vertex shader, the GPU sets depth_buffer = z/w. However, it also computes x/w and y/w, which creates the depth scaling I want to avoid. This means I must make sure my final clip-space w coordinate ends up being 1.0. I think I could also opt to scale x and y by w beforehand, to cancel out the divide, but I would rather do the former, if possible.
This is the process that my 3D projection matrix uses to convert depth into clip space (d = depth, n = near distance, f = far distance)
z = f/(f-n) * d + f/(f-n) * -n;
w = d;
This is how I would like to set up my 2D projection matrix. Compared to the 3D version, it divides both attributes by the input depth, which simulates having z/w encoded into just the z value.
z = ( f/(f-n) * d + f/(f-n) * -n ) / d;
w = d / d;
I think this turns into something like:
r = f/(f-n); // for less crazy math
z = r + ( r * -n ) / d;
w = 1.0;
However, I can't seem to wrap my head around the values I would need to plug into my matrix to get this result. It looks like I would need to set my matrix up to perform a division by depth. Is that even possible? Can anyone help me figure out the values I need to plug into my matrix at m[2][2] and m[3][2] (m._33 and m._43) to make something like this happen?
Note my 3D projection matrix uses the following properties to generate the final z value:
m._33 = f / (f-n); // depth scale
m._43 = -(f / (f-n)) * n; // depth offset
Edit: After thinking about this a little more, I realized that the depth-buffer value I want is not a linear function of the input depth, and I'm pretty sure a matrix can only produce outputs that are linear in its inputs. If that is the case, then what I'm trying to do isn't possible. However, I'm still open to any ideas in the same ballpark, if anyone has one. I know that I can get what I want by simply doing pos.z /= pos.w; pos.w = 1; in the vertex shader, but I was really hoping to make it all happen in the projection matrix, if possible.
In case anyone is attempting to do this: it cannot be done. Without black magic, there is apparently no way to divide values with a matrix, unless the divisor is a constant, in which case you can swap the scalar out for 1/x. I resorted to performing the operation in the shader in the end.
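To spell out why (using r = f/(f-n) from above): once w is pinned to 1, the depth a matrix can output is at most an affine function of d, while the desired value contains a 1/d term:
z_matrix = m._33 * d + m._43   // the most a matrix row can produce when w = 1
z_wanted = r - (r * n) / d     // contains 1/d, so no constants m._33, m._43 can match it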

Giving a direction to a moving object

I wish to create a tower defense game in SDL. Before starting the project, I'm experimenting with everything I will need to do when programming the game. In my current test there is a tower (static object), targets (moving objects) in its range, and shoots (moving objects) fired from the turret at the targets. What I fail to do is find a way to give the 'shoot' objects a direction. By shoot object, I mean the object that is fired by the tower when targets are in range. Also, whatever the direction is, the shoot shall always have the same speed, which forbids the naive formula dirx = x2 - x1.
Shoots are structures defined as the following:
typedef struct shoot
{
    SDL_Surface *img;   // Visual representation of the shoot object.
    SDL_Rect pos;       // Position of the object; a structure containing
                        // coordinates x and y (these are of type int).
    int index;
    float dirx;         // dirx and diry are the movement done by the shoot object in
                        // one frame: each frame the object is moved dirx pixels
                        // on the x axis and diry pixels on the y axis.
    float diry;         // (The program deals with the fact that the movement is done
                        // with integers and not with floats; that is no problem to me.)
    float posx;         // posx and posy are the real, precise coordinates of the shoot.
    float posy;
    struct shoot *prev;
    struct shoot *next;
} shoot;
What I need is a way to calculate the position of the object shoot in the next frame, given its position and direction in the current frame.
This is the best I could find (please note that it is a formula written on paper, so the names are simplified and differ from the names in the code):
dirx = d * ((p2x - p1x) / ((p2x - p1x) + (p2y - p1y)))
diry = d * ((p2y - p1y) / ((p2x - p1x) + (p2y - p1y)))
dirx and diry correspond to the movement done, in the pixel, by the shoot on the axis x and y, in one frame.
d is a multiplier and the big parenthesis (all of what is not d) is a coefficient.
p2 is the point the shoot shall aim for (the center of the target aimed for). p1 is the current position of the shoot object. x or y means that we use the coordinate x or y of the point.
The problem with this formula is that it gives me an inexact value. For example, aiming diagonally makes the shoot slower than aiming straight north. Moreover, it doesn't go in the right direction, and I can't find out why, since my paper tests tell me I'm right...
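(To make the symptom concrete: with d = 1, a target offset of (1, 0) gives (dirx, diry) = (1, 0), i.e. speed 1, while an offset of (1, 1) gives (0.5, 0.5), i.e. speed √(0.25 + 0.25) ≈ 0.71 - so diagonal shoots travel about 30% slower.)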
I would love some help here to find a formula that makes the shoot move correctly.
If p1 is the source of a shoot object, p2 is its destination, and s is the speed you want to move it at (units per frame or per second - the latter is better), then the velocity of the object is given by
float dx = p2.x - p1.x, dy = p2.y - p1.y,
inv = s / sqrt(dx * dx + dy * dy);
velx = inv * dx; vely = inv * dy;
(You should probably change dir to vel as it is a more sensible variable name)
Your attempt seems to be normalizing the direction vector by Manhattan distance, which is wrong - you must normalize by the Euclidean distance, which is given by the sqrt term.
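A sketch of how this plugs into the struct from the question (the helper names aim_shoot and step_shoot are mine; here dirx/diry hold the per-second velocity):
#include <math.h>

/* Point a shoot from (p1x, p1y) toward (p2x, p2y) at speed s (pixels per second). */
void aim_shoot(shoot *sh, float p1x, float p1y, float p2x, float p2y, float s)
{
    float dx = p2x - p1x, dy = p2y - p1y;
    float inv = s / sqrtf(dx * dx + dy * dy); /* normalize by Euclidean length */
    sh->dirx = inv * dx;
    sh->diry = inv * dy;
}

/* Once per frame (dt = seconds since last frame): advance the precise float
   position, then round to ints for SDL blitting. */
void step_shoot(shoot *sh, float dt)
{
    sh->posx += sh->dirx * dt;
    sh->posy += sh->diry * dt;
    sh->pos.x = (int)(sh->posx + 0.5f);
    sh->pos.y = (int)(sh->posy + 0.5f);
}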

How to calculate the angles of projection in 3D for an object to land at a given point?

I need to calculate the angles at which to throw the ball so that, for a given speed, it lands at a given point.
The horizontal angle is easy (we know both the start and landing points). How do I calculate the vertical angle of projection? There is gravity acting on the object.
The time of travel should be the usual bowling time (the time between the ball's release and its landing), as in the video.
Is there a way to do this directly in Unity3D?
Watch this video for 8 seconds for a clear understanding of this question.
According to the Wikipedia page Trajectory of a projectile, the "Angle of reach" (the angle you want to know) is calculated as follows:
θ = 1/2 * arcsin(gd/v²)
In this formula, g is the gravitational acceleration 9.81 m/s², d is the distance you want the projectile to travel, and v is the velocity at which the object is thrown.
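For example (my numbers, just to illustrate): with v = 10 m/s and d = 8 m, θ = 1/2 · arcsin(9.81 · 8 / 10²) = 1/2 · arcsin(0.785) ≈ 1/2 · 51.7° ≈ 25.9°.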
Code to calculate this could look something like this:
float ThrowAngle(Vector3 destination, float velocity)
{
    const float g = 9.81f;
    float distance = Vector3.Distance(transform.position, destination);
    // assuming you want degrees, otherwise just drop the Rad2Deg
    return Mathf.Rad2Deg * (0.5f * Mathf.Asin((g * distance) / Mathf.Pow(velocity, 2f)));
}
This will give you the angle assuming no air resistance etc. exist in your game.
If your destination and your "throwing point" are not at the same height, you may want to set both to y = 0 first; otherwise errors may occur.
EDIT:
Considering that your launch point is higher up than the destination, this formula from the same page should work:
θ = arctan( (v² ± √(v⁴ − g·(g·x² + 2·y·v²))) / (g·x) )
Here, x is the range, or distance, and y is the altitude (relative to the launch point).
Code:
float ThrowAngle(Vector3 start, Vector3 destination, float v)
{
    const float g = 9.81f;
    float xzd = Mathf.Sqrt(Mathf.Pow(destination.x - start.x, 2) + Mathf.Pow(destination.z - start.z, 2));
    float yd = destination.y - start.y;
    // assuming you want degrees, otherwise just drop the Rad2Deg. Split into two lines for better readability.
    float sqrt = Mathf.Sqrt(Mathf.Pow(v, 4) - g * (g * Mathf.Pow(xzd, 2) + 2 * yd * Mathf.Pow(v, 2)));
    // this uses the "+" solution (the higher arc); the "-" solution gives the flatter one. I left that out for simplicity.
    return Mathf.Rad2Deg * Mathf.Atan((Mathf.Pow(v, 2) + sqrt) / (g * xzd));
}

Perspective projection - help me duplicate Blender's default scene

I'm currently attempting to teach myself perspective projection; my reference is the Wikipedia page on the subject: http://en.wikipedia.org/wiki/3D_projection#cite_note-3
My understanding is that you take the object to be projected and rotate and translate it into "camera space", such that the camera is now assumed to be at the origin looking directly down the z axis (this matrix op from the Wikipedia page: http://upload.wikimedia.org/math/5/1/c/51c6a530c7bdd83ed129f7c3f0ff6637.png).
You then project your new points into 2D space using this equation: http://upload.wikimedia.org/math/6/8/c/68cb8ee3a483cc4e7ee6553ce58b18ac.png
The first step I can do flawlessly. Granted, I wrote my own matrix library to do it, but I verified it was spitting out the right answer by typing the results into Blender, moving the camera to (0,0,0), and checking that it renders the same as the default scene.
However, the projection part is where it all goes wrong.
From what I can see, I ought to be taking the field of view, which by default in Blender is 28.842 degrees, and using it to calculate the value Wikipedia calls ez, by doing
ez = 1 / tan(fov / 2);
which is approximately 3.88 in this case.
I should then for every point be doing:
x = (ez / dz) * dx;
y = (ez / dz) * dy;
to get x and y coordinates in the range of -1 to 1 which I can then scale appropriately for the screen width.
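In other words, a minimal C sketch of the step just described (assuming the point (dx, dy, dz) is already in camera space and fov is in radians):
#include <math.h>

/* Project a camera-space point onto the image plane; results land in roughly -1..1. */
void project(float fov, float dx, float dy, float dz, float *x, float *y)
{
    float ez = 1.0f / tanf(fov / 2.0f); /* ez is about 3.88 for fov = 28.842 degrees */
    *x = (ez / dz) * dx;
    *y = (ez / dz) * dy;
}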
However, when I do that, my projected image is mirrored in the x axis and in any case doesn't match the cube Blender renders. What am I doing wrong, and what should I be doing to get the right projected coordinates?
I'm aware that you can do this whole thing with one matrix op, but for the moment I'm just trying to understand the equations, so please just stick to the question asked.
From what you say in your question it's unclear whether you're having trouble with the Projection matrix or the Model matrix.
Like I said in my comments, you can Google glFrustum and gluLookAt to see exactly what these matrices look like. If you're familiar with matrix math (and it looks like you are), you will then understand how the coordinates are transformed into a 2D perspective.
Here is some sample OpenGL code that builds the Projection, View and Model matrices (the Model matrix here being a simple 30-degree rotation about the Y axis), so you can see how the components that go into these matrices are calculated.
// The Projection Matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
near = -camera.viewPos.z - shapeSize * 0.5;
if (near < 0.00001)
    near = 0.00001;
far = -camera.viewPos.z + shapeSize * 0.5;
if (far < 1.0)
    far = 1.0;
radians = 0.0174532925 * camera.aperture / 2; // half aperture degrees to radians
wd2 = near * tan(radians);
ratio = camera.viewWidth / (float) camera.viewHeight;
if (ratio >= 1.0) {
    left = -ratio * wd2;
    right = ratio * wd2;
    top = wd2;
    bottom = -wd2;
} else {
    left = -wd2;
    right = wd2;
    top = wd2 / ratio;
    bottom = -wd2 / ratio;
}
glFrustum(left, right, bottom, top, near, far);

// The View Matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camera.viewPos.x, camera.viewPos.y, camera.viewPos.z,
          camera.viewPos.x + camera.viewDir.x,
          camera.viewPos.y + camera.viewDir.y,
          camera.viewPos.z + camera.viewDir.z,
          camera.viewUp.x, camera.viewUp.y, camera.viewUp.z);

// The Model Matrix
glRotatef(30.0, 0.0, 1.0, 0.0);
You'll see that glRotatef actually takes an axis-angle rotation (an angle of rotation plus a vector about which to do the rotation).
You could also do separate rotations about the X, Y and Z axis.
There's lots of information on the web about how to form 4x4 matrices for rotations, translations and scales. If you do each of these separately, you'll need to multiply them together to get the Model matrix. E.g., if you have 4x4 matrices rotateX, rotateY, rotateZ, translate and scale, you might form your Model matrix by:
Model = scale * rotateX * rotateZ * rotateY * translate.
Order matters when you form the Model matrix. You'll get different results if you do the multiplication in a different order.
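As a small self-contained illustration of that multiplication (plain C, row-major storage, column-vector convention; the helper names are mine):
#include <math.h>
#include <string.h>

/* out = a * b for 4x4 matrices stored row-major. The order of the arguments
   is exactly what the paragraph above says matters. */
void mat4_mul(const float a[16], const float b[16], float out[16])
{
    float tmp[16];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[r * 4 + k] * b[k * 4 + c];
            tmp[r * 4 + c] = s;
        }
    memcpy(out, tmp, sizeof tmp);
}

/* Rotation about the Y axis by deg degrees, like glRotatef(deg, 0, 1, 0). */
void mat4_rotate_y(float deg, float out[16])
{
    float rad = deg * 0.0174532925f;
    float m[16] = {
         cosf(rad), 0.0f, sinf(rad), 0.0f,
         0.0f,      1.0f, 0.0f,      0.0f,
        -sinf(rad), 0.0f, cosf(rad), 0.0f,
         0.0f,      0.0f, 0.0f,      1.0f,
    };
    memcpy(out, m, sizeof m);
}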
If your object is at the origin, I doubt you want to also put the camera at the origin.

I've got my 2D/3D conversion working perfectly, how to do perspective

Although the context of this question is making a 2D/3D game, the problem I have boils down to some math.
Although it's a 2.5D world, let's pretend it's just 2D for this question.
// xa: x-accent, the x coordinate of the projection
// mapP: a coordinate on a map which need to be projected
// _Dist_ values are constants for the projection, choosing them correctly will result in i.e. an isometric projection
xa = mapP.x * xDistX + mapP.y * xDistY;
ya = mapP.x * yDistX + mapP.y * yDistY;
xDistX and yDistX determine the angle of the x-axis, and xDistY and yDistY determine the angle of the y-axis on the projection (and also the size of the grid, but let's assume this is 1 pixel for simplicity).
x-axis-angle = atan(yDistX / xDistX)
y-axis-angle = atan(yDistY / xDistY)
a "normal" coordinate system like this
--------------- x
|
|
|
|
|
y
has values like this:
xDistX = 1;
yDistX = 0;
xDistY = 0;
yDistY = 1;
So every step in the x direction results, on the projection, in 1 pixel to the right and 0 pixels down. Every step in the y direction results in 0 pixels to the right and 1 pixel down.
By choosing the correct xDistX, yDistX, xDistY and yDistY, you can project any trimetric or dimetric system (which is why I chose this approach).
So far so good: when this is drawn, everything turns out okay. If "my system" and mindset are clear, let's move on to perspective.
I wanted to add some perspective to this grid, so I added some extras like this:
camera = new MapPoint(60, 60);
dx = mapP.x - camera.x; // delta x
dy = mapP.y - camera.y; // delta y
dist = Math.sqrt(dx * dx + dy * dy); // dist is the distance to the camera, Pythagoras etc.. all objects must be in front of the camera
fac = 1 - dist / 100; // this formula determines the amount of perspective
xa = fac * (mapP.x * xDistX + mapP.y * xDistY) ;
ya = fac * (mapP.x * yDistX + mapP.y * yDistY );
Now the really hard part: what if you have a point (xa, ya) on the projection and want to calculate the original point (x, y)?
For the first case (without perspective) I did find the inverse function, but how can this be done for the formula with perspective? My math skills are not quite up to the challenge.
(I vaguely remember from long ago that Mathematica can create inverse functions for some special cases... could it solve this problem? Could someone maybe try?)
The function you've defined doesn't have an inverse. Just as an example, as user207422 already pointed out, anything that's 100 units away from the camera gets mapped to (xa,ya) = (0,0), so the inverse isn't uniquely defined.
More importantly, that's not how you calculate perspective. Generally the perspective scaling factor is defined to be viewdist/zdist where zdist is the perpendicular distance from the camera to the object and viewdist is a constant which is the distance from the camera to the hypothetical screen onto which everything is being projected. (See the diagram here, but feel free to ignore everything else on that page.) The scaling factor you're using in your example doesn't have the same behaviour.
Here's a stab at trying to convert your code into a correct perspective calculation (note I'm not simplifying to 2D; perspective is about projecting three dimensions onto two, so trying to simplify the problem to 2D is kind of pointless):
camera = new MapPoint(60, 60, 10);
camera_z = camera.x*zDistX + camera.y*zDistY + camera.z*zDistZ;
// viewdist is the distance from the viewer's eye to the screen in
// "world units". You'll have to fiddle with this, probably.
viewdist = 10.0;
xa = mapP.x*xDistX + mapP.y*xDistY + mapP.z*xDistZ;
ya = mapP.x*yDistX + mapP.y*yDistY + mapP.z*yDistZ;
za = mapP.x*zDistX + mapP.y*zDistY + mapP.z*zDistZ;
zdist = camera_z - za;
scaling_factor = viewdist / zdist;
xa *= scaling_factor;
ya *= scaling_factor;
You're only going to return xa and ya from this function; za is just for the perspective calculation. I'm assuming the "za-direction" points out of the screen, so if the pre-projection x-axis points towards the viewer then zDistX should be positive, and vice versa, and similarly for zDistY. For a trimetric projection you would probably have xDistZ == 0, yDistZ < 0, and zDistZ == 0. This would make the pre-projection z-axis point straight up post-projection.
Now the bad news: this function doesn't have an inverse either. Any point (xa,ya) is the image of an infinite number of points (x,y,z). But! If you assume that z=0, then you can solve for x and y, which is possibly good enough.
To do that you'll have to do some linear algebra. Compute camera_x and camera_y similarly to camera_z. Those are the post-transformation coordinates of the camera. The point on the screen has post-transformation coordinates (xa, ya, camera_z - viewdist). Draw a line through those two points, and calculate where it intersects the plane spanned by the vectors (xDistX, yDistX, zDistX) and (xDistY, yDistY, zDistY). In other words, you need to solve the equations:
x*xDistX + y*xDistY == s*camera_x + (1-s)*xa
x*yDistX + y*yDistY == s*camera_y + (1-s)*ya
x*zDistX + y*zDistY == s*camera_z + (1-s)*(camera_z - viewdist)
It's not pretty, but it will work.
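A sketch of that solve in plain C using Cramer's rule (the function name and parameter list are mine; I've simply moved the s terms to the left-hand side of the three equations first):
/* 3x3 determinant, expanded along the first row. */
static double det3(double a, double b, double c,
                   double d, double e, double f,
                   double g, double h, double i)
{
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
}

/* Solve
     x*xDistX + y*xDistY + s*(xa - camera_x) = xa
     x*yDistX + y*yDistY + s*(ya - camera_y) = ya
     x*zDistX + y*zDistY - s*viewdist        = camera_z - viewdist
   for (x, y). Returns 0 on success, -1 if the system is singular. */
int unproject(double xa, double ya,
              double xDistX, double xDistY, double yDistX, double yDistY,
              double zDistX, double zDistY,
              double camera_x, double camera_y, double camera_z,
              double viewdist, double *x, double *y)
{
    double d = det3(xDistX, xDistY, xa - camera_x,
                    yDistX, yDistY, ya - camera_y,
                    zDistX, zDistY, -viewdist);
    if (d == 0.0)
        return -1; /* degenerate view: no unique solution */
    double r3 = camera_z - viewdist;
    *x = det3(xa, xDistY, xa - camera_x,
              ya, yDistY, ya - camera_y,
              r3, zDistY, -viewdist) / d;
    *y = det3(xDistX, xa, xa - camera_x,
              yDistX, ya, ya - camera_y,
              zDistX, r3, -viewdist) / d;
    return 0;
}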
I think that with your post I can solve the problem. Still, to clarify some things:
Solving the problem in 2D is useless indeed, but this was only done to make the problem easier to grasp (for me and for the readers here). My program actually gives a perfect 3D projection (I checked it against 3D images rendered with Blender). I did leave something out about the inverse function, though: the inverse function is only for coordinates between 0..camera.x * 0.5 and 0..camera.y * 0.5, so in my example between 0 and 30. But even then I have doubts about my function.
In my projection the z-axis is always straight up, so to calculate the height of an object I only used the viewing angle. But since you can't actually fly or jump into the sky, everything has only a 2D point. This also means that when you try to solve for x and y, z really is 0.
I know not every function has an inverse, and some functions do but only on a particular domain. My basic thought in all this was: if I can draw a grid using a function, every point on that grid maps to exactly one map point. I can read the x and y coordinates, so if I just had the correct function I would be able to calculate the inverse.
But there is no better replacement than some good solid math, and I'm very glad you took the time to give a very helpful response :).
