I can't figure out how to perform matrix rotation using a quaternion while taking the pivot position into account in OpenGL. What I am currently getting is rotation of the object around some point in space, rather than around its local pivot, which is what I want.
Here is the code (using Java).
Quaternion rotation method:
public void rotateTo3(float xr, float yr, float zr) {
    _rotation.x = xr;
    _rotation.y = yr;
    _rotation.z = zr;
    Quaternion xrotQ = Glm.angleAxis(xr, Vec3.X_AXIS);
    Quaternion yrotQ = Glm.angleAxis(yr, Vec3.Y_AXIS);
    Quaternion zrotQ = Glm.angleAxis(zr, Vec3.Z_AXIS);
    xrotQ = Glm.normalize(xrotQ);
    yrotQ = Glm.normalize(yrotQ);
    zrotQ = Glm.normalize(zrotQ);
    Quaternion acumQuat;
    acumQuat = Quaternion.mul(xrotQ, yrotQ);
    acumQuat = Quaternion.mul(acumQuat, zrotQ);
    Mat4 rotMat = Glm.matCast(acumQuat);
    _model = new Mat4(1);
    scaleTo(_scaleX, _scaleY, _scaleZ);
    _model = Glm.translate(_model, new Vec3(_pivot.x, _pivot.y, 0));
    _model = rotMat.mul(_model); // also tried: _model.mul(rotMat);
    _model = Glm.translate(_model, new Vec3(-_pivot.x, -_pivot.y, 0));
    translateTo(_x, _y, _z);
    notifyTranformChange();
}
Model matrix scale method:
public void scaleTo(float x, float y, float z) {
    _model.set(0, x);
    _model.set(5, y);
    _model.set(10, z);
    _scaleX = x;
    _scaleY = y;
    _scaleZ = z;
    notifyTranformChange();
}
Translate method:
public void translateTo(float x, float y, float z) {
    _x = x - _pivot.x;
    _y = y - _pivot.y;
    _z = z;
    _position.x = _x;
    _position.y = _y;
    _position.z = _z;
    _model.set(12, _x);
    _model.set(13, _y);
    _model.set(14, _z);
    notifyTranformChange();
}
But this method, which doesn't use a quaternion, works fine:
public void rotate(Vec3 axis, float angleDegr) {
    _rotation.add(axis.scale(angleDegr));
    // change to GLM:
    Mat4 backTr = new Mat4(1.0f);
    backTr = Glm.translate(backTr, new Vec3(_pivot.x, _pivot.y, 0));
    backTr = Glm.rotate(backTr, angleDegr, axis);
    backTr = Glm.translate(backTr, new Vec3(-_pivot.x, -_pivot.y, 0));
    _model = _model.mul(backTr); // also tried: backTr.mul(_model);
    notifyTranformChange();
}
It seems to me you already take the back-and-forth translation before and after the rotation into account. Why the final call to translateTo?
Besides, a pure rotation is always about the origin. So if you want a rotation around your pivot point, I'd expect translating the pivot to the origin, rotating, and then translating back to the pivot to be the right thing to do. Therefore, I'd expect your code to look like this:
_model = Glm.translate(_model, new Vec3(-_pivot.x, -_pivot.y, 0));
_model = rotMat.mul(_model);
_model = Glm.translate(_model, new Vec3(_pivot.x, _pivot.y, 0));
and without the call to translateTo(_x, _y, _z). Also, can you confirm that the rotation part already does what it's supposed to? You can check this by comparing rotMat with Glm.rotate(new Mat4(1.0f), angleDegr, axis); they should be the same for the same rotation.
A quaternion describes only a rotation. So how do you expect to rotate something around a pivot point with a quaternion alone?
The minimum you need is an R³ vector and a quaternion. With only one level of transformation, you first rotate the object and then translate it into place.
If you want to build a matrix, you first create the rotation matrix and then add the translation unaltered.
If you want to just call glTranslate and glRotate (or glMultMatrix), you would call glTranslate first and then glRotate.
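In matrix form (my notation, column-vector convention): the quaternion supplies the upper-left 3×3 rotation block $R$, and the translation $t$ fills the last column unaltered, so applying the matrix rotates first and then translates:

$$M = \begin{pmatrix} R & t \\ 0^{\top} & 1 \end{pmatrix}, \qquad M \begin{pmatrix} v \\ 1 \end{pmatrix} = \begin{pmatrix} R\,v + t \\ 1 \end{pmatrix}.$$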
Edit:
If you are not rendering and just want to know where each vertex is:
Vector3 newVertex = quat.transform(oldVertex) + translation;
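To make the pivot handling concrete, here is a minimal self-contained Java sketch (my own helper, not from the code above) that applies a unit quaternion (qw, qx, qy, qz) to a point about a pivot and then translates. It uses the standard expansion of q·v·q⁻¹:

static float[] rotateAroundPivot(float qw, float qx, float qy, float qz,
                                 float[] p, float[] pivot, float[] t) {
    // All float[] arguments are {x, y, z}.
    // Move the pivot to the origin.
    float vx = p[0] - pivot[0], vy = p[1] - pivot[1], vz = p[2] - pivot[2];
    // q * v * conj(q), expanded (valid for unit quaternions):
    // r = v + qw * (2 cross(q.xyz, v)) + cross(q.xyz, 2 cross(q.xyz, v))
    float cx = 2 * (qy * vz - qz * vy);
    float cy = 2 * (qz * vx - qx * vz);
    float cz = 2 * (qx * vy - qy * vx);
    float rx = vx + qw * cx + (qy * cz - qz * cy);
    float ry = vy + qw * cy + (qz * cx - qx * cz);
    float rz = vz + qw * cz + (qx * cy - qy * cx);
    // Move back to the pivot, then apply the object's translation.
    return new float[] { rx + pivot[0] + t[0],
                         ry + pivot[1] + t[1],
                         rz + pivot[2] + t[2] };
}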
I'm trying to code correct 2D affine texture mapping in GLSL.
Explanation:
...none of these images is correct for my purposes. The right one (labeled Correct) has perspective correction, which I do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for.
I'd like to simply "stretch" the texture inside the quadrilateral, something like this:
but composed from two triangles. Any advice (GLSL), please?
This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.
GLSL:
varying vec2 shiftedPosition, width_height;

#ifdef VERTEX
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
    width_height = gl_MultiTexCoord1.xy;
}
#endif

#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
    gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif
C#:
// Zero out the left and bottom edges,
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
    Vector2.zero,
    new Vector2(0, vertices[1].y - vertices[0].y),
    new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
    new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;

var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL may be of help. There's a working demo in Java (that runs on Android), but it is compact and readable and should be easily portable to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0
In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code.
float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C, u, v;

// These are all uniforms for a given quad. Calculate on the CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;
Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;
float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);
A = (Ay * Cx) - (Ax * Cy);

// These must be calculated per-vertex, and passed through as interpolated values to the pixel shader.
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);

// These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
u = ((-B) - Mathf.Sqrt(B * B - (4.0f * A * C))) / (A * 2.0f);
v = (x - (u * Cx) - Dx) / ((u * Ax) + Bx);
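For reference, here is the math the snippet implements, using the coefficient definitions above. The quad is the image of the unit square under the bilinear map

$$x(u,v) = A_x\,uv + B_x\,v + C_x\,u + D_x, \qquad y(u,v) = A_y\,uv + B_y\,v + C_y\,u + D_y.$$

Solving the first equation for $v$ gives $v = (x - C_x u - D_x)/(A_x u + B_x)$, and substituting into the second eliminates $v$ and leaves the per-pixel quadratic

$$A\,u^2 + B\,u + C = 0, \qquad A = A_y C_x - A_x C_y,$$

whose root the code takes with the quadratic formula.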
Tessellation solves this problem: subdividing the quad adds vertices, which gives the interpolator more hints.
Check out this link:
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
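As a rough illustration of the idea (a sketch of my own, not taken from the video): subdividing the quad into an n × n grid of bilinearly interpolated vertices makes each triangle small enough that plain affine interpolation stays close to the intended stretch.

// A minimal sketch in Java: subdivide a quad into an n x n grid of vertices
// with matching UVs; each grid cell can then be drawn as two small triangles.
public final class QuadSubdivider {
    // Corners p00, p10, p11, p01 are {x, y}, in counter-clockwise order.
    public static float[][] subdivide(float[] p00, float[] p10,
                                      float[] p11, float[] p01, int n) {
        float[][] verts = new float[(n + 1) * (n + 1)][4]; // {x, y, u, v}
        for (int j = 0; j <= n; j++) {
            float v = (float) j / n;
            for (int i = 0; i <= n; i++) {
                float u = (float) i / n;
                // Bilinear blend of the corner positions; UVs are just (u, v).
                float x = (1 - u) * (1 - v) * p00[0] + u * (1 - v) * p10[0]
                        + u * v * p11[0] + (1 - u) * v * p01[0];
                float y = (1 - u) * (1 - v) * p00[1] + u * (1 - v) * p10[1]
                        + u * v * p11[1] + (1 - u) * v * p01[1];
                verts[j * (n + 1) + i] = new float[] { x, y, u, v };
            }
        }
        return verts;
    }
}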
I had a similar question (https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871), and at gamedev they suggested using an imaginary Z coordinate, which I calculate using the following C code, which appears to work in the general case (not just trapezoids):
// usual Euclidean distance
float distance(int ax, int ay, int bx, int by) {
    int x = ax - bx;
    int y = ay - by;
    return sqrtf((float)(x * x + y * y));
}

void gfx_quad(gfx_t *dst  // destination texture we are rendering into
             ,gfx_t *src  // source texture
             ,int *quad   // quadrilateral vertices
             )
{
    int *v = quad; // quad vertices
    float z = 20.0;
    float top = distance(v[0], v[1], v[2], v[3]); // top
    float bot = distance(v[4], v[5], v[6], v[7]); // bottom
    float lft = distance(v[0], v[1], v[4], v[5]); // left
    float rgt = distance(v[2], v[3], v[6], v[7]); // right

    // By default all vertices lie on the screen plane.
    float az = 1.0;
    float bz = 1.0;
    float cz = 1.0;
    float dz = 1.0;

    // Move Z away from the screen, based on the distance ratios.
    if (top < bot) {
        az *= top / bot;
        bz *= top / bot;
    } else {
        cz *= bot / top;
        dz *= bot / top;
    }
    if (lft < rgt) {
        az *= lft / rgt;
        cz *= lft / rgt;
    } else {
        bz *= rgt / lft;
        dz *= rgt / lft;
    }

    // draw our quad as two textured triangles
    gfx_textured(dst, src
                , v[0], v[1], az, v[2], v[3], bz, v[4], v[5], cz
                , 0.0, 0.0, 1.0, 0.0, 0.0, 1.0);
    gfx_textured(dst, src
                , v[2], v[3], bz, v[4], v[5], cz, v[6], v[7], dz
                , 1.0, 0.0, 0.0, 1.0, 1.0, 1.0);
}
I'm doing it in software to scale and rotate 2D sprites; for an OpenGL 3D app you will need to do it in the pixel/fragment shader, unless you can map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
I ran into this issue while trying to implement a homography warping in OpenGL. Some of the solutions that I found relied on a notion of depth, but this was not feasible in my case, since I am working in 2D coordinates.
I based my solution on this article, and it seems to work for all the cases that I could try. I am leaving it here in case it is useful for someone else, as I could not find anything similar. The solution makes the following assumptions:
The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.
The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).
std::vector<cv::Point2f> points; // the 4 quad vertices, filled in elsewhere

// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);

// Compute the intersection point between the lines described by opposite
// vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/
// for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);

// Compute the distance from each point to the intersection.
std::vector<float> distances;
for (const auto &pt : points) {
    auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
                              std::pow(pt.y - intersection(1), 2));
    distances.push_back(distance);
}

// Assumes same order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
    {1.0f, 1.0f},
    {1.0f, 0.0f},
    {0.0f, 0.0f},
    {0.0f, 1.0f}
};

std::vector<float> texture_coords;
for (std::size_t i = 0; i < texture_coords_unnormalized.size(); ++i) {
    float u_i = texture_coords_unnormalized[i].x;
    float v_i = texture_coords_unnormalized[i].y;
    float d_i = distances.at(i);
    float d_i_2 = distances.at((i + 2) % 4);
    float scale = (d_i + d_i_2) / d_i_2;
    texture_coords.push_back(u_i * scale);
    texture_coords.push_back(v_i * scale);
    texture_coords.push_back(scale);
}
Pass the texture coordinates to your shader (use vec3). Then:
gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
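To summarize the scheme the code sets up (my notation): with $d_i$ the distance from vertex $i$ to the intersection of the diagonals,

$$s_i = \frac{d_i + d_{i+2}}{d_{i+2}}, \qquad (u'_i,\, v'_i,\, w'_i) = (u_i s_i,\; v_i s_i,\; s_i), \qquad (u, v) = \left(\frac{u'}{w'},\; \frac{v'}{w'}\right),$$

so the hardware interpolates the scaled coordinates linearly and the per-fragment division by $w'$ recovers a projective mapping across the whole quad.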
Thanks for the answers, but after experimenting I found a solution.
The two triangles on the left have UV (STRQ) coordinates assigned according to this, and the two triangles on the right are a modified version of this perspective correction.
Numbers and shader:
tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]
d1 = length of top edge = 2
d2 = length of bottom edge = 1
tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
Only the triangles on the right are rendered using this GLSL shader (the left side uses the fixed pipeline):
uniform sampler2D colormap;

void main()
{
    gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}
So only U is perspective-corrected, while V stays linear.
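Spelled out (my reading of the numbers above): each vertex carries gl_TexCoord = (u·q, v, 0, q), with q = d2/d1 on the short edge and q = 1 on the long one. The hardware interpolates all four components linearly in screen space, so the division in the fragment shader reintroduces a projective mapping for U only:

$$U = \frac{\overline{u\,q}}{\overline{q}}, \qquad V = \overline{v},$$

where the overbars denote the linearly interpolated values.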
I have a program that I'm making with others, and I ran into a problem. I'm working on adding polygon models into our scene in an XNA window. I have that part complete. I also have bounding spheres (I know I tagged this bounding-box, but there is no bounding-sphere tag) drawing around each polygon. My problem is that when I move the polygons around the 3D space, the bounding spheres move twice as much as the polygons. I imagine it's something within the polygon matrices that I use to create the bounding sphere that makes it move twice as much, but that is only speculation.
So just to clarify, I'll give you an example of my problem. If I hold down D to move a polygon along the X axis (model.position.X--;), the polygon moves as expected, but the bounding sphere around the polygon moves twice as much. Thanks for the help, guys!
Here is how I draw the models and the bounding spheres:
public void Draw(Matrix view, Matrix projection, bool drawBoundingSphere)
{
    Matrix translateMatrix = Matrix.CreateTranslation(position);
    Matrix worldMatrix = translateMatrix * Matrix.CreateScale(scaleRatio);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World = worldMatrix * modelAbsoluteBoneTransforms[mesh.ParentBone.Index];
            effect.View = view;
            effect.Projection = projection;
            effect.EnableDefaultLighting();
            effect.PreferPerPixelLighting = true;
        }
        mesh.Draw();

        if (drawBoundingSphere)
        {
            // the mesh's BoundingSphere is stored relative to the mesh itself
            // (mesh space). We want to get this BoundingSphere in terms of world
            // coordinates. To do this, we calculate a matrix that will transform
            // coordinates from mesh space into world space....
            Matrix world = modelAbsoluteBoneTransforms[mesh.ParentBone.Index] * worldMatrix;

            // ... and then transform the BoundingSphere using that matrix.
            BoundingSphere sphere = BoundingSphereRenderer.TransformBoundingSphere(mesh.BoundingSphere, world);

            // now draw the sphere with our renderer
            BoundingSphereRenderer.Draw(sphere, view, projection);
        }
    }
}
And here is the BoundingSphereRenderer Code:
private static VertexBuffer vertexBuffer;
private static BasicEffect effect;
private static int lineCount;

public static void Initialize(GraphicsDevice graphicsDevice, int sphereResolution)
{
    // create our effect
    effect = new BasicEffect(graphicsDevice);
    effect.LightingEnabled = false;
    effect.VertexColorEnabled = true;

    // calculate the number of lines to draw for all circles
    lineCount = (sphereResolution + 1) * 3;

    // we need two vertices per line, so we can allocate our vertices
    VertexPositionColor[] vertices = new VertexPositionColor[lineCount * 2];

    // compute our step around each circle
    float step = MathHelper.TwoPi / sphereResolution;

    // used to track the index into our vertex array
    int index = 0;

    // create the loop on the XY plane first
    for (float angle = 0f; angle < MathHelper.TwoPi; angle += step)
    {
        vertices[index++] = new VertexPositionColor(new Vector3((float)Math.Cos(angle), (float)Math.Sin(angle), 0f), Color.Blue);
        vertices[index++] = new VertexPositionColor(new Vector3((float)Math.Cos(angle + step), (float)Math.Sin(angle + step), 0f), Color.Blue);
    }

    // next on the XZ plane
    for (float angle = 0f; angle < MathHelper.TwoPi; angle += step)
    {
        vertices[index++] = new VertexPositionColor(new Vector3((float)Math.Cos(angle), 0f, (float)Math.Sin(angle)), Color.Red);
        vertices[index++] = new VertexPositionColor(new Vector3((float)Math.Cos(angle + step), 0f, (float)Math.Sin(angle + step)), Color.Red);
    }

    // finally on the YZ plane
    for (float angle = 0f; angle < MathHelper.TwoPi; angle += step)
    {
        vertices[index++] = new VertexPositionColor(new Vector3(0f, (float)Math.Cos(angle), (float)Math.Sin(angle)), Color.Green);
        vertices[index++] = new VertexPositionColor(new Vector3(0f, (float)Math.Cos(angle + step), (float)Math.Sin(angle + step)), Color.Green);
    }

    // now we create the vertex buffer and put the vertices in it
    vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionColor), vertices.Length, BufferUsage.WriteOnly);
    vertexBuffer.SetData(vertices);
}

public static void Draw(this BoundingSphere sphere, Matrix view, Matrix projection)
{
    if (effect == null)
        throw new InvalidOperationException("You must call Initialize before you can render any spheres.");

    // set the vertex buffer
    effect.GraphicsDevice.SetVertexBuffer(vertexBuffer);

    // update our effect matrices
    effect.World = Matrix.CreateScale(sphere.Radius) * Matrix.CreateTranslation(sphere.Center);
    effect.View = view;
    effect.Projection = projection;

    // draw the primitives with our effect
    effect.CurrentTechnique.Passes[0].Apply();
    effect.GraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, lineCount);
}
public static BoundingSphere TransformBoundingSphere(BoundingSphere sphere, Matrix transform)
{
    BoundingSphere transformedSphere;

    // the transform can contain different scales on the x, y, and z components.
    // this has the effect of stretching and squishing our bounding sphere along
    // different axes. Obviously, this is no good: a bounding sphere has to be a
    // SPHERE. so, the transformed sphere's radius must be the maximum of the
    // scaled x, y, and z radii.

    // to calculate how the transform matrix will affect the x, y, and z
    // components of the sphere, we'll create a vector3 with x, y, and z equal
    // to the sphere's radius...
    Vector3 scale3 = new Vector3(sphere.Radius, sphere.Radius, sphere.Radius);

    // then transform that vector using the transform matrix. we use
    // TransformNormal because we don't want to take translation into account.
    scale3 = Vector3.TransformNormal(scale3, transform);

    // scale3 contains the x, y, and z radii of a squished and stretched sphere.
    // we'll set the finished sphere's radius to the maximum of the x, y, and z
    // radii, creating a sphere that is large enough to contain the original
    // squished sphere.
    transformedSphere.Radius = Math.Max(scale3.X, Math.Max(scale3.Y, scale3.Z));

    // transforming the center of the sphere is much easier. we can just use
    // Vector3.Transform to transform the center vector. notice that we're using
    // Transform instead of TransformNormal because in this case we DO want to
    // take translation into account.
    transformedSphere.Center = Vector3.Transform(sphere.Center, transform);

    return transformedSphere;
}
I am trying to make a semicircle with a straight line going from its end point to point at a target. I have tried multiple approaches for a day now and can't get it to point exactly at the target position. Here is my progress so far:
I'm trying to get the dark green line to go through the red point on the yellow line.
Posting the code so far:
vector init = <105.45535, 105.83867, 2239.99976>;
vector init_unit = <-0.54465, 0.83867, 0.00000>;
vector target = <106, 104, 2241>;

default {
    state_entry() {
        llListen(-215485231, "", NULL_KEY, "");
    }

    listen(integer c, string n, key i, string m) {
        list temp = llParseString2List(m, ["|"], []);
        init = (vector)llList2String(temp, 0); // position
        init_unit = (vector)llList2String(temp, 1);
        init_unit = llVecNorm(<init_unit.x, init_unit.y, 0.0>); // normalized line vector
        float angle = llAtan2(init_unit.y, init_unit.x); // find angle
        rotation delta = llEuler2Rot(<0.0, -PI_BY_TWO, PI>); // extra rotation
        rotation rot = delta * llEuler2Rot(<0.0, 0.0, angle>); // convert from vector to rotation
        init = init + <0.0, -0.45, 0.0> * rot; // make new offset
        vector p1 = target - init;
        float angle2 = llAtan2(p1.y, p1.x);
        rotation rot2 = llEuler2Rot(<0.0, 0.0, angle>);
        vector p2 = init + (<0.0, 0.45, 0.0> * (delta * rot2)); // find the other side of the semicircle
        p2 = (p2 + p1) - init;
        float angle3 = llAcos((p1 * p2) / (llVecMag(p1) * llVecMag(p2))); // find angle between vectors
        llSetRot(delta * llEuler2Rot(<0.0, 0.0, angle2 + angle3>)); // set rotation
        llSetPos(init); // set position
    }
}
The semicircle is 1 m along the Y axis, and the middle of each end is at +/- <0, 0.45, 0>.
Please ask if anything is unclear.
I'm being quite dim: I can't figure out what should probably be a fairly trivial trig problem.
Given Cartesian coordinates (x, y, z), I would like to determine a new coordinate given a direction (x, y, and z angles) and a distance to travel.
class Cartesian() {
    int x = 0;
    int y = 0;
    int z = 0;

    int move(int distance, int x_angle, int y_angle, int z_angle) {
        x += distance * //some trig here
        y += distance * //some trig here
        z += distance * //some trig here
    }
}
I.e., I want to move a given distance from the origin in a given direction, and I need the coordinates of the new position.
This is actually for a JavaScript application, but I just need a bit of pseudocode to help me out.
Thanks
The way you've stated the problem, it seems that "direction cosines" make the most sense.
Assuming x_angle is the angle in radians between the target direction and the X axis, etc.:
dc_x = cos(x_angle);
dc_y = cos(y_angle);
dc_z = cos(z_angle);
delta_x = dc_x * distance;
delta_y = dc_y * distance;
delta_z = dc_z * distance;
x += delta_x;
y += delta_y;
z += delta_z;
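Putting it together, here is a minimal sketch in Java (the class and method names mirror the pseudocode in the question; the port to JavaScript is mechanical). One caveat worth knowing: the three angles are not independent, since for a true direction the cosines must satisfy cos²(x_angle) + cos²(y_angle) + cos²(z_angle) = 1.

// A minimal sketch of the Cartesian class with move() filled in using
// direction cosines. Angles are in radians, measured from each axis.
class Cartesian {
    double x = 0, y = 0, z = 0;

    void move(double distance, double xAngle, double yAngle, double zAngle) {
        // Direction cosines: the components of the unit vector along the
        // travel direction.
        double dcx = Math.cos(xAngle);
        double dcy = Math.cos(yAngle);
        double dcz = Math.cos(zAngle);
        x += distance * dcx;
        y += distance * dcy;
        z += distance * dcz;
    }
}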