How to rotate a Vector3 using Vector2?

I want to simulate particles driven by wind on a three.js globe. The data I have is a Vector3 for the position of a particle and a Vector2 indicating wind speed and direction (think North/East). How do I get the new Vector3?
I've consulted numerous examples and read the documentation, and I believe the solution involves quaternions, but the axis of rotation is not given. Also, there are thousands of particles, so it should be fast; real-time is not required, however.
The radius of the sphere is 1.

I would recommend you have a look at the Spherical class provided by three.js. Instead of Cartesian coordinates (x, y, z), a point is represented in a spherical coordinate system (θ (theta), φ (phi), r).
The value of theta is the longitude and phi is the latitude of your globe (r - sphereRadius would be the height above the surface). Your wind vectors can then be interpreted as changes to these two values. So what I would try is basically this:
// a) convert particle location to spherical
const sphericalPosition = new THREE.Spherical()
    .setFromVector3(particle.position);

// b) update theta/phi (note that windSpeed is assumed to
// be given in radians/time, but for a sphere of size 1 that
// shouldn't make a difference)
sphericalPosition.theta += windSpeed.x; // east direction
sphericalPosition.phi += windSpeed.y;   // north direction

// c) write back to particle position
particle.position.setFromSpherical(sphericalPosition);
Performance-wise this shouldn't be a problem at all (though maybe don't create a new Spherical instance for every particle like I did above). The conversions involve a bit of trigonometry, but we're talking about thousands of points, not millions.
Hope that helps!

If you just want to rotate a vector by an angle, you can perform a simple rotation of values in the specified plane yourself using trig, as per this page, e.g. for a rotation in the xz-plane:
var x = Math.cos(theta) * vec_to_rotate.x - Math.sin(theta) * vec_to_rotate.z;
var z = Math.sin(theta) * vec_to_rotate.x + Math.cos(theta) * vec_to_rotate.z;
var rotated_vector = new THREE.Vector3(x, vec_to_rotate.y, z);
But to move particles with wind, you're not really rotating a vector; you should be adding a velocity vector, and the particle 'rotates' its own heading based on a combination of initial velocity, inertia, air friction, and additional competing forces, like so:
init() {
    position = new THREE.Vector3(0, 0, 0);
    velocity = new THREE.Vector3(1, 0, 0);
    wind_vector = new THREE.Vector3(0, 0, 1);
}

update() {
    velocity.add(wind_vector);
    position.add(velocity);
    velocity.multiplyScalar(0.95); // friction/drag
}
This model is truer to how wind influences a particle. The particle starts off heading along the x-axis and then eventually 'turns' to go in the direction of the wind, without any rotation of vectors. It has a mass and a velocity in a direction, a force acts on it, and it turns.
You can see that because the whole velocity is subject to friction (the multiplyScalar), the initial velocity diminishes as the wind vector accumulates, which causes a turn without performing any rotations. Thought I'd throw this out just in case you're unfamiliar with working with particle systems and maybe were just thinking about it wrong.


Understanding Angular velocities and their application

I recently had to convert Euler rotation rates to a vectorial angular velocity.
From what I understand, in a local reference frame, we can express the vectorial angular velocity as:
R = [rollRate, pitchRate, yawRate] (which is the correct order relative to the reference frame I want to use).
I also know that we can convert angular velocities to rotations (quaternion) for a given time-step via:
alpha = |R| * ts
nR = R / |R| * sin(alpha) <-- normalize and multiply each element by sin(alpha)
Q = [nRx i, nRy j, nRz k, cos(alpha)]
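For comparison, the conventional axis-angle to quaternion conversion uses the half angle (sin(alpha/2), cos(alpha/2)). A minimal C++ sketch of turning an angular-velocity vector into a per-time-step quaternion under that convention (the Quat type and names are illustrative, not from this post):

#include <cmath>

struct Quat { double x, y, z, w; }; // illustrative quaternion type

// Convert an angular-velocity vector R = (rx, ry, rz) [rad/s] over a
// time-step ts into a rotation quaternion (half-angle convention).
Quat angularVelocityToQuat(double rx, double ry, double rz, double ts) {
    const double mag = std::sqrt(rx*rx + ry*ry + rz*rz);
    if (mag < 1e-12) return {0.0, 0.0, 0.0, 1.0};  // no rotation
    const double alpha = mag * ts;                 // total rotation angle
    const double s = std::sin(alpha / 2.0);        // note the half angle
    return {rx/mag*s, ry/mag*s, rz/mag*s, std::cos(alpha / 2.0)};
}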
When I test this for each axis individually, I get results that I totally expect (i.e. a 90° pitch/time-unit for 1 time unit => a 90° pitch angle).
When I use two axes for my rotation rates however, I don't fully understand the results:
For example, if I use rollRate = 0, pitchRate = 90, yawRate = 90, apply the rotation for a given time-step and convert the resulting quaternion back to euler, I obtain the following results:
(ts = 0.1) Roll: 0.712676, Pitch: 8.96267, Yaw: 9.07438
(ts = 0.5) Roll: 21.058, Pitch: 39.3148, Yaw: 54.9771
(ts = 1.0) Roll: 76.2033, Pitch: 34.2386, Yaw: 137.111
I understand that a "smooth" continuous rotation might change the roll component midway.
What I don't understand, however, is why after a full unit of time with a 90°/time-unit pitchRate combined with a 90°/time-unit yawRate I end up with these pitch and yaw angles, and why I still have roll (I would have expected to end up at [0°, 90°, 90°]).
I am pretty confident in both my axis+angle-to-quaternion and my quaternion-to-Euler formulas, as I've tested these extensively (both via unit testing and via field testing). I'm not sure, however, about the Euler-rotation-rate to angular-velocity "conversion".
My first bet would be that I do not understand how Euler rotation-rate axes interact with each other; my second would be that this "conversion" between Euler rotation rates and the angular velocity vector is incorrect.
Euler angles are not a good way of representing arbitrary angular movement. They are just a simplification used in graphics, games, and robotics. They come with some pretty hard restrictions, like your rotations consisting of only N perpendicular axes in N-D space. That is not how rotation works in the real world. On top of this, the spherical representation of the orientation endpoint creates a lot of singularities (you know, when you cross the poles ...).
Rotational movement is analogous to translation (position/velocity/acceleration versus angle (ang), angular velocity (omg) and angular acceleration (eps)):
pos = Integral(vel) = Integral(Integral(acc))
ang = Integral(omg) = Integral(Integral(eps))
Inside some update timer this can be rewritten as:
vel += acc*dt; pos += vel*dt;
omg += eps*dt; ang += omg*dt;
where dt is the elapsed time (the timer interval).
The problem with rotation is that you cannot superimpose it like translation. Each rotation has its own axis (which need not be axis-aligned, nor centered), and each rotation affects the axis orientation of all the others, so their order matters a lot. On top of all this there is also the gyroscopic moment, which creates a third rotation from any two whose axes are not parallel. Put all of this together and suddenly you see that Euler angles do not match the real geometry/physics of rotation. They can describe an orientation and fake its rotation up to a degree, but do not expect them to make real sense once used for physics simulation.
A real simulation would require a list of rotations described by their axes (not just direction but also origin) and angular speeds (and their changes), and in each simulation step the axes would have to be recomputed, as they change (unless only a single rotation is present).
This can be done by using cumulative homogeneous transform matrices along with incremental rotations, as sketched below.
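A minimal sketch of the cumulative-matrix idea (my own illustration, not code from this answer; a full version would use 4x4 homogeneous matrices so the axis origin travels along too):

#include <cmath>

// 3x3 rotation matrix, row-major; minimal illustrative type
struct Mat3 { double m[3][3]; };

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{}; // zero-initialized
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// small incremental rotation about the z-axis by angle da [rad]
Mat3 rotZ(double da) {
    const double c = std::cos(da), s = std::sin(da);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

// each simulation step: multiply the increment into the cumulative matrix
// (re-orthonormalize 'orientation' periodically, e.g. with Gram-Schmidt,
// to undo accumulated floating-point drift)
void step(Mat3& orientation, double omgZ, double dt) {
    orientation = mul(orientation, rotZ(omgZ * dt));
}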
Sadly, the majority of programmers prefer Euler angles and quaternions, simply from not knowing that there are better and simpler options, and once they do, they stick with Euler angles anyway, as matrix math seems more complicated to them... That is why many of today's games have gimbal locks, major rotation errors and glitches, and unrealistic physics.
Don't get me wrong, they still have their uses (like, for example, restricting free-look for a camera, etc.), but they are misused for things they are the worst option for.

A* orientation discretization

I have a space with obstacles that I wish to find a path through. What I can do is discretize the space into a grid and use A* (or D* or whatever) to find a path through it. I now wish to add orientation to the algorithm, so the node location becomes a 3D vector (x, y, phi). You can go from one node to another only if they belong to an arc (both positions are on a circle and are oriented along its tangent lines). How do I discretize the space so that the angles don't explode, in the sense that by traversing the graph the set of possible angles remains finite?
Thanks.
As I understand it, your challenge is not to discretize the coordinates, but to discretize the headings. I had to do the same thing in a grid world that allowed movement in eight directions, i.e. horizontal, vertical and diagonal. Your discretized space should match the problem domain. For your consideration:
4 directions: use a square grid with movement across edges
8 directions: use a square grid with movement across edges and vertices
6 directions: use a hexagonal grid with movement across edges
12 directions: use a hexagonal grid with movement across edges and points
... and so on.
To actually get the discretized headings, I declared an enum called Direction:
public enum Direction {
    North,
    NorthEast,
    East,
    SouthEast,
    South,
    SouthWest,
    West,
    NorthWest;
    //additional code below...
}
You can look up the correct heading by first computing the XY-offset from the current position to the goal position (from current to goal, so the heading points toward the goal):
int dx = goalPosition.x - currentPosition.x;
int dy = goalPosition.y - currentPosition.y;
These were passed to the getInstance(int,int) method (below) to obtain the correct Direction:
public static Direction getInstance(int dx, int dy) {
    int count = Direction.values().length;
    double rad = Math.atan2(dy, dx); // in radians
    double degree = rad * (180 / Math.PI) + 450;
    return getInstance(((int) Math.round((degree % 360) / (360 / count))) % count);
}

public static Direction getInstance(int i) {
    return Direction.values()[i % Direction.values().length];
}
In effect, these methods compute the heading in degrees and round it to the nearest Direction. You can then implement a method that moves/turns the agent in the Direction heading, e.g. agent.turnToward(Direction d) or agent.move(Direction d).
Additional Resources:
Hexagon grids: http://www.redblobgames.com/grids/hexagons/#distances
Representing grids with graphs: http://www.redblobgames.com/pathfinding/grids/algorithms.html
Pathfinding with A*: http://theory.stanford.edu/~amitp/GameProgramming/
Angles can be prevented from blowing up by ensuring that phi is taken modulo 2*pi, that is, phi = phi + 2*pi*k for any integer value of k.
In C-like syntax, you might end up updating phi with fmod:
phi = fmod(phi + deltaphi, 2*pi)
Where deltaphi is the change in angle you're introducing (in radians).
The most common way to do this is to constrain the values of the angle phi to one of n discrete angles, which also has the advantage of avoiding precision/rounding issues. Given that phi can only take on one of n values, you can treat it as an integer and map it to a real angle when necessary:
i = (i + deltai) % n
phi = (2 * i * pi) / n
where a change of deltai steps corresponds to (2 * deltai * pi) / n radians.
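A minimal C++ sketch of this integer-heading representation (the type and names are illustrative):

// Heading stored as an integer step in [0, n); floating point is only
// touched when converting to radians, so no rounding drift accumulates.
struct DiscreteHeading {
    int i = 0;   // current step index
    int n = 16;  // number of discrete headings

    void turn(int deltai) {
        i = ((i + deltai) % n + n) % n; // wrap, keeping the index non-negative
    }
    double radians() const {
        const double pi = 3.14159265358979323846;
        return (2.0 * pi * i) / n;
    }
};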
However, finding a good discretization is only part of the solution: it defines a representation of your configuration space, but as you've pointed out, you also need to consider what a valid transition is.
The simplest approach to integrating angles into planning is to require rotations and translations to be distinct (at any time you can do one or the other, but not both), or to be composable (always translate, and then on arriving instantaneously rotate).
Moving forward or backward while you're turning is more complex and tends not to work particularly well with discrete lattices; it usually requires some model of the vehicle you're working with. The most common are the simple nonholonomic models: the forward-only car (the Dubins car) or the car with forward/reverse (the Reeds-Shepp car). Given your reference to tangents to circles, I'm guessing this is what you're after. Dubins-Curves or similar libraries can be used to build libraries of possible paths that can be combined with an A* (or D*) planner.
Differentially Constrained Mobile Robot Motion Planning in State Lattices by Mihail Pivtoraiko, Ross A. Knepper and Alonzo Kelly has some striking images of what's possible.

Gravity's acceleration between two objects

So I am making a program where you can have two objects (circles). I want them to orbit each other like planets, but only in 2D.
I know that using Newton's universal law of gravitation I can get the force between the two objects. I also know a = F / m. My question is: how would I take the a from the previous equation and turn it into a vector?
You need to use vector equations:
// init values (per object)
double ax=0.0,ay=0.0,az=0.0; // acceleration [m/s^2]
double vx=0.0,vy=0.0,vz=0.0; // velocity [m/s]
double x=0.0, y=0.0, z=0.0; // position [m]
double m=1.0; // mass [kg]
// iteration inside some timer (dt [seconds] period) ...
int i; double a,dx,dy,dz; // first compute acceleration
for (ax=0.0,ay=0.0,az=0.0,i=0;i<obj.num;i++)
if (obj[i]!=this) // ignore gravity from itself
{
dx=obj[i].x-x;
dy=obj[i].y-y;
dz=obj[i].z-z;
a=sqrt((dx*dx)+(dy*dy)+(dz*dz)); // a=distance to obj[i]
a=6.67384e-11*(obj[i].m*m)/(a*a*a); // a=acceleration/distance to make dx,dy,dz unit vector
ax+=a*dx; // ax,ay,az = actual acceleration vector (integration)
ay+=a*dy;
az+=a*dz;
}
vx+=ax*dt; // update speed via integration of acceleration
vy+=ay*dt;
vz+=az*dt;
x+=vx*dt; // update position via integration of velocity
y+=vy*dt;
z+=vz*dt;
Code is taken from here.
obj[] is the list of all your objects
obj.num is their count
I recommend creating an object class with all the variables inside (ax, ay, az, ..., m), initializing them once, and then continuously updating them (iterating) in some timer. If you want more accuracy, you should compute ax, ay, az for all objects first and only then update speed and position (to avoid objects changing position during the gravity computation). If you want to drive an object (like with a thruster), just add its acceleration to the ax, ay, az vector.
Now to set up an orbit, just:
place the planet object
it must be massive enough; also set its position / velocity to what you want
place the satellite
Its initial position should be somewhere near the planet, and it should not be too massive. Also init its speed vector with a direction tangent to the orbiting trajectory (a sketch for computing a circular-orbit speed follows after this list). If the speed is too low it will collapse into the planet, and if the speed is too high it will escape from the planet; otherwise it will orbit (circle or ellipse).
set up the timer
The lower the interval, the better the simulation; usually 10 ms is OK, but for massive and distant objects 100 ms and more is also OK. If you want particles or something, then use 1 ms (a very dynamic scene).
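As promised above, the tangential speed for a circular orbit follows from equating gravitational and centripetal acceleration, G*M/r^2 = v^2/r. A minimal C++ sketch (the function name is mine):

#include <cmath>

// speed for a circular orbit of radius r [m] around a body of mass M [kg],
// derived from G*M/r^2 = v^2/r  =>  v = sqrt(G*M/r)
double circularOrbitSpeed(double M, double r) {
    const double G = 6.67384e-11; // gravitational constant [m^3/(kg*s^2)]
    return std::sqrt(G * M / r);
}

// usage: give the satellite this speed along a direction perpendicular
// (tangent) to the planet-to-satellite vector.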
I strongly recommend reading this related QA:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
especially [edit3] about the integration precision and creating orbital data.
With two objects you are probably best off using an ellipse, which is the path the objects will follow about their common center of mass. Read up on Kepler's laws of planetary motion, which give the background.
If one object has a much greater mass than the other, i.e. a sun and a planet, you can keep one stationary and have the other take an elliptical path. The equation of the ellipse is given by
r = K e / ( 1 + e cos(theta))
where K is a constant giving the size and e is the eccentricity. If you want an elliptical orbit, have 0 < e < 1; the smaller it is, the more circular the orbit. To get x, y coordinates from this, use x = r cos(theta), y = r sin(theta). The missing bit is time and how the angle depends on time. This is where the second and third laws come in. If a and b are the semi-major and semi-minor axis lengths of the ellipse, and P is the period, then
0.5 * P * r^2 * theta' = pi * a * b
where theta' is the rate of change of the angle with respect to time (d theta / d t). You can use this to get how much theta will change for a given increase in time. First work out the current radius r0 given the current angle th0; if the time increment is δt, then the angle increment δtheta is
δtheta = 2 * pi * a * b / (P * r0^2) * δt
and the next angle is th0 + δtheta.
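A small sketch of stepping the orbit this way (illustrative C++; K, e, a, b and P are the constants defined above):

#include <cmath>

// advance the orbital angle theta by one time-step dt using
// Kepler's second law: theta' = 2*pi*a*b / (P * r^2)
double stepTheta(double theta, double dt,
                 double K, double e, double a, double b, double P) {
    const double pi = 3.14159265358979323846;
    const double r = K * e / (1.0 + e * std::cos(theta)); // current radius
    return theta + 2.0 * pi * a * b / (P * r * r) * dt;
}

// position on the ellipse for a given theta:
//   x = r*cos(theta), y = r*sin(theta)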
If the masses are of similar magnitude, then see the two-body problem. Both objects will have elliptical orbits; there are two patterns, which you can see in animations on that page. The ellipses will follow the same formula as above, with the focus at the common center of mass.
If you have three objects, things get considerably harder and there are generally no neat solutions. See the three-body problem for this.

Aligning a point cloud on a grid

I have to measure the Z-distances for corresponding points of two clouds.
I intend to iterate through one cloud and calculate the distance between Z coordinates using the same X and Y of the other cloud.
Unfortunately this doesn't work, as there is never a point at exactly these X-Y coordinates in the second cloud. My current workaround is to search the second cloud for the point closest to the X-Y of the first cloud. It works, but it is very slow.
Is there a way to align the points' X and Y coordinates on a defined grid using PCL? This way I hope the X-Y coordinates will match better.
EDIT
Ok, here are some images and more explanation.
Top view
Side view
There is a scan of a saddle and a horse back. Both were made independently but are aligned in the Z-axis; the Z-axes of both are parallel.
I want to create a model of a layer which fits exactly under the saddle (not just a rectangular pad).
So, given a thickness of the layer, I want to iterate through the saddle points and find the Z-distance to the corresponding point on the horse back. As the coordinates are floats, there is nearly never a point on the horse with the same XY as on the saddle.
I think if I could align all points to a grid with a given density, there would be a corresponding XY-point on the horse for each XY saddle point above it.
I am not really sure if that is what you mean, but maybe the "grid" you are talking about could just be the image plane? So instead of using the 3D point cloud, you could take the depth maps/depth images and just compare the values of the two depth maps at the same image coordinates. This assumes that the recordings are already aligned.
If you only have the point cloud data, you'd have to perform a projection onto the plane (for this you'd have to know the intrinsics of the camera).
Another option might be aligning the clouds using a registration method (e.g. ICP). Then you could also get the (sum of) distance(s) for corresponding points of the clouds.
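If you try the registration route, a rough ICP sketch with PCL could look like this (a sketch assuming pcl::PointXYZ clouds; adapt the point type as needed):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Align 'source' to 'target' with ICP; the fitness score is the
// mean squared distance of corresponding points after alignment.
void alignClouds(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                 pcl::PointCloud<pcl::PointXYZ>::Ptr target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned); // 'aligned' is the transformed source cloud

    if (icp.hasConverged())
    {
        double fitness = icp.getFitnessScore();
        Eigen::Matrix4f transform = icp.getFinalTransformation();
        (void)fitness; (void)transform; // use as needed
    }
}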
I've implemented a proof of concept and want to share it. However, I'd still appreciate a "proper" solution, probably a PCL API function.
bool alignToGrid( pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud, QMap<QString, float> & grid, int density )
{
    pcl::PointXYZRGBNormal p1;
    p1.r = 0;
    p1.g = 0;
    p1.b = 255;
    QMap<QString, QList<float> > tmpGridMap;
    for( std::vector<pcl::PointXYZRGBNormal, Eigen::aligned_allocator<pcl::PointXYZRGBNormal> >::iterator it1 = cloud->points.begin();
         it1 != cloud->points.end(); it1++ )
    {
        p1.x = it1->x;
        p1.y = it1->y;
        p1.z = it1->z;
        int gridx = p1.x * density;
        int gridy = p1.y * density;
        QString pos = QString("%1x%2").arg(gridx).arg(gridy);
        tmpGridMap[pos].append(p1.z);
    }
    for( QMap<QString, QList<float> >::iterator it = tmpGridMap.begin(); it != tmpGridMap.end(); ++it )
    {
        float meanZ = 0;
        foreach( float f, it.value() )
        {
            meanZ += f;
        }
        meanZ /= it.value().size();
        grid[it.key()] = meanZ;
    }
    return true;
}
The idea is to iterate through a cloud and keep/create only points whose XY coordinates lie on the defined grid. A density of 1000 for Kinect clouds results in roughly a 1 mm grid.
All points around a grid point are used for building the Z-average.
The cloud itself remains unmodified. The output is a map from XY-position to Z. The XY position is stored as a string of the form "<x>x<y>" (weird, I know). Using this map it is easy to find corresponding XY-points in other grid-aligned clouds.
Now I was able to map my clouds using any density, e.g. 1 mm and 1 cm in the images.
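Once both clouds have been run through alignToGrid with the same density, the per-cell Z-distances can be read off by key lookup; an illustrative sketch (the function name is mine):

// Compute the Z-distance for every grid cell present in both maps,
// e.g. saddleGrid and horseGrid filled by alignToGrid().
QMap<QString, float> zDistances( const QMap<QString, float> & saddleGrid,
                                 const QMap<QString, float> & horseGrid )
{
    QMap<QString, float> result;
    for( QMap<QString, float>::const_iterator it = saddleGrid.constBegin();
         it != saddleGrid.constEnd(); ++it )
    {
        if( horseGrid.contains( it.key() ) ) // only cells covered by both scans
            result[it.key()] = it.value() - horseGrid[it.key()];
    }
    return result;
}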

Normal Mapping on procedural sphere

I am a student in video games, and we are working on a raytracer in C++, using our teachers' library.
We create procedural objects (in our case a sphere); the camera sends a ray for each pixel of the screen, and the ray sends back information on what it hit.
Some of us decided to integrate normal maps. So, at first, we sent rays at the object, looked at the value of the normal map texel where we hit the sphere, converted it to a vector, normalized it, and sent it back in place of the normal of the object. The result was pretty good, but of course it no longer took the orientation of the "face" into account (it's procedural, so there is no face, but it gives the idea), so the render was flat.
We still don't really know how to "blend" the normal of the texture (in tangent space) and the normal of the object together. Here is our code:
// TGfxVec3 is part of our teachers' library, and is a 3D vector like this:
// TGfxVec3( 12.7f, -13.4f, 52.0f )
// The sphere being at the origin and of radius 1, and tHit.m_tPosition being the
// exact position at the surface of the sphere where the ray hit, the normal of this
// point is the position hit by the ray.
TGfxVec3 tNormal = tHit.m_tPosition;
TGfxVec3 tTangent = Vec3CrossProduct( tNormal, m_tAxisZ );
TGfxVec3 tBiNormal = Vec3CrossProduct( tNormal, tTangent );
TGfxVec3 tTextureNorm = 2*(TGfxVec3( pNorm[0], pNorm[1], pNorm[2] )/255)-TGfxVec3( -1.0f, -1.0f, -1.0f );
// pNorm[0], pNorm[1], pNorm[2] are respectively the Red, Green,
// and Blue channels of the normal map texture.
// We put them in a 3D vector, divide them by 255 so their values go from 0 to 1,
// multiply them by 2, and then subtract a vector, so their range goes from -1 to +1.
tHit.m_tNorm = TGfxVec3( tTangent.x*tTextureNorm.x + tBiNormal.x*tTextureNorm.x +
    tNormal.x*tTextureNorm.x, tTangent.y*tTextureNorm.y + tBiNormal.y*tTextureNorm.y +
    tNormal.y*tTextureNorm.y, tTangent.z*tTextureNorm.z + tBiNormal.z*tTextureNorm.z +
    tNormal.z*tTextureNorm.z ).Normalize();
// Here, after some research, I came across this: http://www.txutxi.com/?p=316 ,
// which allows us to convert the normal map from tangent space to object space.
The results are still not good. My main concern is the tangent and binormal: the axis taken as reference (here m_tAxisZ, the Z axis of the sphere) is not right. But I don't know what to take instead, or even whether what I am doing is right at all. So I came here for help.
So, we finally did it. :D OK, I will try to be clear. For this, two images:
(1) : http://i.imgur.com/cHwrR9A.png
(2) : http://i.imgur.com/mGPH1RW.png
(My drawing skills have no equal, I know.)
So, the main problem was to find the tangent "T" and the bi-tangent "B"; we already have the normal "N". Our sphere always being at the origin with a radius of 1, a point on its surface is equal to the normal at that point (black and red vector on the first image). So, we have to find the tangent at that point (in green). For this, we just have to rotate the vector by PI/2 rad:
With N( x, y ):
T = ( -N.y, N.x )
However, we are in 3D, so the point will not always be at the equator. We can easily solve this problem by ignoring the Y component of our point and normalizing the vector using only the two other components. So, on the second image, we have P (we set its Y value to 0), and we normalize the new vector to get P':
With P( x, y, z ):
P' = ( P.x, 0, P.z ).Normalize();
Then we apply the rotation from above to P' to find T. Finally, we get B with a cross product between N and T, and we compute the normal at that point by taking the normal map into account.
With the variable "Map" containing the three channels (RGB) of the normal map, each one mapped from -1 to 1, and T, N and B all being 3D vectors:
( Map.R*T + Map.G*B + Map.B*N ).Normalize();
And that's it: you have the normal at that point, taking your normal map into account. :) Hope this will be useful for others.
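Putting these steps together, a self-contained sketch of the construction (using a minimal Vec3 of my own in place of the teachers' TGfxVec3; note it still degenerates at the poles, where P' has zero length):

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 normalized() const {
        const float len = std::sqrt(x * x + y * y + z * z);
        return {x / len, y / len, z / len};
    }
};

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// n: unit normal on the sphere; map: normal-map texel mapped to [-1, 1]
Vec3 perturbedNormal(const Vec3& n, const Vec3& map) {
    const Vec3 p = Vec3{n.x, 0.0f, n.z}.normalized(); // project to equator plane
    const Vec3 t = {-p.z, 0.0f, p.x};                 // rotate by PI/2 in the xz-plane
    const Vec3 b = cross(n, t);                       // bi-tangent
    return Vec3{map.x * t.x + map.y * b.x + map.z * n.x,
                map.x * t.y + map.y * b.y + map.z * n.y,
                map.x * t.z + map.y * b.z + map.z * n.z}.normalized();
}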
You are mostly right and completely wrong at the same time.
Tangent-space normal mapping uses a transformation matrix to convert the tangent-space normal from the texture to another space, like object or world space, or transforms the light into tangent space to compute the lighting with everything in the same space.
"Bi-normal" is a common mistake; it should be named bi-tangent.
It is sometimes possible to compute the TBN on the fly on simple geometry, e.g. on a height map, as it is easy to deduce the tangent and the bi-tangent on a regular grid. But on a sphere, the cross-product trick with a fixed axis results in a singularity at the poles, where the cross product gives a zero-length vector.
Last, even if we ignore the pole singularity, the TBN must be normalized before you apply the matrix to the tangent-space normal. You may also be missing a transpose: the inverse of a 3x3 orthonormal matrix is its transpose, and what you need is the inverse of the original TBN matrix if you go from tangent space to object space.
Because of all this, we most often store the TBN as extra information in the geometry, computed from the texture coordinates (the URL you referenced links to that computation), and interpolate it at runtime along with the other vertex values.
Remark: there is a rough simplification of using the geometric normal as the TBN normal, but there is no reason in the first place that they should match.
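For reference, the per-triangle tangent computation from texture coordinates usually looks something like this sketch (reusing the illustrative Vec3 from the sketch above; p0..p2 are the triangle's positions, u/v its texture coordinates):

// Tangent of a triangle from its positions and texture coordinates.
// dp1,dp2 are edge vectors; du/dv the matching UV deltas.
Vec3 triangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                     float u0, float v0, float u1, float v1, float u2, float v2) {
    const Vec3 dp1 = {p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    const Vec3 dp2 = {p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    const float du1 = u1 - u0, dv1 = v1 - v0;
    const float du2 = u2 - u0, dv2 = v2 - v0;
    const float r = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate UVs
    return Vec3{(dp1.x * dv2 - dp2.x * dv1) * r,
                (dp1.y * dv2 - dp2.y * dv1) * r,
                (dp1.z * dv2 - dp2.z * dv1) * r}.normalized();
}
// the bi-tangent is then cross(normal, tangent) (or computed analogously
// from the same deltas), and both are interpolated per-vertex at runtime.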
