Drawing objects aligned to face normals in OpenGL - math

I am trying to draw a normal handle (so far, a tall rectangular shape) on geometry faces, centered on each face and pointing along its normal.
I am doing it with the code below:
Vec3 up(0.0, 1.0, 0.0);
Vec3 rotation_axis = up.cross(face_normal);
double dot = up.dot(face_normal);
float rotate_angle = std::acos(dot);
Mat4 matrix;
matrix.translate(face_center.x, face_center.y, face_center.z);
matrix.rotate(rotation_axis, rotate_angle);
glMultMatrixd(matrix.copyGL());
Then I draw a tall cube in world space (y up).
This works fine sometimes, but at other orientations (e.g. 45° angles in two dimensions) it fails, or the direction is correct but the rectangle appears rotated about its own axis and is not aligned properly.
Is this the correct way to draw objects aligned with a normal in OpenGL (and perhaps, later, perpendicular to one), or is there a better way?

The cross product of two normalized vectors is not itself normalized. Its length is actually the sine of the rotation angle.
You can use that to get a more accurate value for the angle:
Vec3 axis = up.cross(face_normal);
float angle = std::atan2(axis.length(), up.dot(face_normal));
axis = axis.normalize();
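The same computation sketched in Python (the Vec3 math written out by hand; names are mine), including the degenerate case where up and the normal are (anti)parallel and the cross product vanishes:

```python
import math

def rotation_axis_angle(up, n):
    """Axis and angle rotating `up` onto `n` (both assumed unit length).

    atan2(|cross|, dot) is numerically stable over the full 0-180 degree
    range, unlike acos(dot), which loses precision near 0 and 180 degrees.
    """
    cross = (up[1]*n[2] - up[2]*n[1],
             up[2]*n[0] - up[0]*n[2],
             up[0]*n[1] - up[1]*n[0])
    sin_a = math.sqrt(cross[0]**2 + cross[1]**2 + cross[2]**2)
    cos_a = up[0]*n[0] + up[1]*n[1] + up[2]*n[2]
    angle = math.atan2(sin_a, cos_a)
    if sin_a < 1e-12:
        # up and n are parallel or anti-parallel: the cross product is
        # (near) zero, so any axis perpendicular to up works
        return (1.0, 0.0, 0.0), angle
    return tuple(c / sin_a for c in cross), angle
```

The early-out for the parallel case matters in practice: without it, normalizing a near-zero cross product produces garbage axes for faces whose normal already points along up.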

Related

Normalized Device Coordinate Metal coming from OpenGL

Alright, so I know there are a lot of questions referring to normalized device coordinates here on SO, but none of them address my particular issue.
Everything I draw is specified in 2D screen coordinates, where the top left is (0, 0) and the bottom right is (screenWidth, screenHeight). In my vertex shader I then do this calculation to get to NDC (basically, I'm rendering UI elements):
float ndcX = (screenX - ScreenHalfWidth) / ScreenHalfWidth;
float ndcY = 1.0 - (screenY / ScreenHalfHeight);
where screenX/screenY are pixel coordinates, for example (600, 700), and ScreenHalfWidth/ScreenHalfHeight are half of the screen width/height.
And the final position that I return from the vertex shader for the rasterization state is:
gl_Position = vec4(ndcX, ndcY, Depth, 1.0);
This works perfectly fine in OpenGL ES.
The problem is that when I try exactly the same thing in Metal 2, it doesn't work.
I know Metal's NDC volume is 2x2x1 and OpenGL's is 2x2x2, but I thought depth didn't play an important part in this equation since I am passing it in myself per vertex.
I tried this link and this SO question but was confused; the links weren't very helpful since I am trying to avoid matrix calculations in the vertex shader, as I am rendering everything in 2D for now.
So my questions: What is the formula to transform pixel coordinates to NDC in Metal? Is it possible without using an orthographic projection matrix? Why doesn't my equation work for Metal?
It is of course possible without a projection matrix. Matrices are just a useful convenience for applying transformations. But it's important to understand how they work when situations like this arise, since using a general orthographic projection matrix would perform unnecessary operations to arrive at the same results.
Here are the formulae I might use to do this:
float xScale = 2.0f / drawableSize.x;
float yScale = -2.0f / drawableSize.y;
float xBias = -1.0f;
float yBias = 1.0f;
float clipX = position.x * xScale + xBias;
float clipY = position.y * yScale + yBias;
Where drawableSize is the dimension (in pixels) of the renderbuffer, which can be passed in a buffer to the vertex shader. You can also precompute the scale factors and pass those in instead of the screen dimensions, to save some computation on the GPU.
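The same mapping as a small, language-agnostic Python sketch (drawableSize passed as width/height arguments):

```python
def pixel_to_metal_ndc(x, y, width, height):
    """Map top-left-origin pixel coordinates to Metal clip space.

    Metal's clip-space x and y both run from -1 to +1 with y pointing up,
    so the y axis must be flipped relative to screen pixels.
    """
    x_scale = 2.0 / width
    y_scale = -2.0 / height  # negative: pixel y grows down, clip y grows up
    x_bias, y_bias = -1.0, 1.0
    return (x * x_scale + x_bias, y * y_scale + y_bias)
```

The corners map as expected: (0, 0) to the top-left (-1, 1) and (width, height) to the bottom-right (1, -1), with the screen center landing on the origin.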

Converting XYZ to XY (world coords to screen coords)

Is there a way to convert that data:
Object position which is a 3D point (X, Y, Z),
Camera position which is a 3D point (X, Y, Z),
Camera yaw, pitch, roll (-180:180, -90:90, 0)
Field of view (-45°:45°)
Screen width & height
into the 2D point on the screen (X, Y)?
I'm looking for proper math calculations according to this exact set of data.
It's difficult, but possible to do yourself.
There are lots of libraries that do this for you, but it is more satisfying if you write it yourself:
This problem is possible and I have written my own 3D engine to do this for objects in javascript using the HTML5 Canvas. You can see my code here and solve a 3D maze game I wrote here to try and understand what I will talk about below...
The basic idea is to work in steps. To start, you have to forget about the camera angles (yaw, pitch and roll), as these come later, and just imagine you are looking down the y axis. The idea is then to calculate, using trigonometry, the yaw and pitch angles from the camera to your object's coordinate. By this I mean: imagine you are looking through a letterbox; the yaw angle would be the angle in degrees left or right of the center line to your coordinate (so both positive and negative), and the pitch the angle up or down from it. Taking these angles, you can map them to the x and y of a 2D coordinate system.
The calculations for the angles are:
yaw = atan((coord.x - cam.x) / (coord.y - cam.y))
pitch = atan((coord.z - cam.z) / (coord.y - cam.y))
with coord.x, coord.y and coord.z being the coordinates of the object, and the same for the camera (cam.x, cam.y and cam.z). These calculations also assume a Cartesian coordinate system with the axes being: z up, y forward and x right.
From here, the next step is to map this angle in the 3D world to a coordinate which you can use in a 2D graphical representation.
To map these angles onto your screen, you need to scale them up as distances from the mid line. This means multiplying them by screen width / fov. Finally, these distances will be positive or negative (as each is an angle from the mid line), so to actually draw on a canvas you need to add half of the screen width.
So your canvas coordinates would be:
x = width / 2 + yaw * (width / fov)
y = height / 2 + pitch * (height / fov)
where width and height are the dimensions of your screen, fov is the camera's field of view, and yaw and pitch are the respective angles of the object from the camera.
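The steps above can be sketched in a few lines of Python (plain math, nothing framework-specific; the names and sign conventions are mine):

```python
import math

def project(coord, cam, fov, width, height):
    """Toy 3D -> 2D projection: z up, y forward, x right; fov in radians.

    Whether positive pitch should move up or down on screen depends on
    your canvas's y direction; flip the sign of the y term if needed.
    """
    dy = coord[1] - cam[1]                     # forward distance from camera
    yaw = math.atan2(coord[0] - cam[0], dy)    # left/right angle
    pitch = math.atan2(coord[2] - cam[2], dy)  # up/down angle
    x = width / 2 + yaw * (width / fov)
    y = height / 2 + pitch * (height / fov)
    return x, y
```

An object straight ahead lands on the screen center, and with a 90° fov an object 45° off-axis lands on the screen edge, which matches the scaling described above.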
You have now achieved the first big step, which is mapping a 3D coordinate down to 2D. If you have managed to get this all working, I would suggest trying multiple points and connecting them to form shapes. Also try moving your camera's position to see how the perspective changes; you will soon see how realistic it already looks.
In addition, if this worked fine for you, you can move on to having the camera be able to not only change its position in the 3D world but also change its perspective as in yaw, pitch and roll angles. I will not go into this entirely now, but the basic idea is to use 3D world transformation matrices. You can read up about them here but they do get quite complicated, however I can give you the calculations if you get this far.
It might help to read (old style) OpenGL specs:
https://www.khronos.org/registry/OpenGL/specs/gl/glspec14.pdf
See section 2.10
Also:
https://www.khronos.org/opengl/wiki/Vertex_Transformation
Might help with more concrete examples.
Also, for "proper math" look up 4x4 matrices, projections, and homogeneous coordinates.
https://en.wikipedia.org/wiki/Homogeneous_coordinates

How are angles on QPainterPath::arcTo interpreted?

I'm working on a feature of a graphics editor where I'm editing arcs, and QPainterPath::arcTo is not behaving as I expected when the shape is an ellipse; it works as expected when it's a circle.
The two images below show the results. In the first case, I've created a circle, which I then convert to an arc with an initial start angle of 45 and span angle of 270. The scene coordinate space is square. The diagonal lines are at 45 degrees. As expected, the circular arc's end points are exactly on the diagonal lines.
In the second case, I have an ellipse, which is converted to an arc in exactly the same way with 45 and 270 degree angles respectively. The end points of the arc no longer fall on the diagonal lines, which is not what I expect.
In both cases, the drawing code is:
painter.arcTo (rect, 45, 270);
Zero degrees is at the 3 o'clock position, and I had believed that the specified angle was measured between that and a line from the center point to the point on the arc edge. Clearly, something else is going on that I don't understand and doesn't appear to be documented in the QPainter::arcTo description.
This is an issue because I'm writing code to reshape the arc, and I need to be able to work backwards when all I have is the current mouse position and the center point of the encompassing rectangle. Right now, as I reshape the arc, the angle that I'm calculating is only accurate at 0, 90, 180, and 270 degrees. The closer I get to the intervening 45-degree angles, the further off my angle is.
I'm getting that angle by:
QLineF (rect.center(), mouse_pos).angle ();
Again, for circles, this works perfectly. For non-circular ellipses, it doesn't.
After writing this up, I found this beautiful illustration, which exactly demonstrates what I'm dealing with. Unfortunately, the PostScript solution isn't helpful for me. I need to know how to calculate the correct angles.
I've found my answer here. As I expected, my understanding of the angles was incorrect. To perform my mouse tracking to reshape the arc, I need to find the intersection of a line segment with an ellipse and work backward from the parametric ellipse equations to find the correct angle.
Thanks to @goug I was able to fix a similar issue with QPainterPath::arcTo.
In my case I need an elliptical arc that behaves like a normal arc. Where start and end angles are controlled by a user. The start angle doesn't require any corrections, but the end angle does.
Here is code that shows how to work around this issue:
qreal startAngle = 10;
qreal endAngle = 60;
qreal radius1 = 30; // X-axis
qreal radius2 = 60; // Y-axis
QPointF center;
QRectF boundingRect(center.x() - radius1, center.y() - radius2, radius1*2, radius2*2);
if (!qFuzzyIsNull(endAngle) &&
    !VFuzzyComparePossibleNulls(endAngle, 90) &&
    !VFuzzyComparePossibleNulls(endAngle, 180) &&
    !VFuzzyComparePossibleNulls(endAngle, 270) &&
    !VFuzzyComparePossibleNulls(endAngle, 360))
{
    // Calculating the correct end angle
    qreal endAngleRad = qDegreesToRadians(endAngle);
    endAngle = qRadiansToDegrees(qAtan2(radius1 * qSin(endAngleRad),
                                        radius2 * qCos(endAngleRad)));
}
QLineF startLine(center.x(), center.y(), center.x() + radius1, center.y());
QLineF endLine = startLine;
startLine.setAngle(startAngle);
endLine.setAngle(endAngle);
qreal sweepAngle = startLine.angleTo(endLine);
QPainterPath myPath;
myPath.arcTo(boundingRect, startAngle, sweepAngle);
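The core of the correction is that arcTo's angle is the parametric angle of the ellipse, not the geometric angle of the line from the center to the point on the arc. A small Python sketch of the same conversion (names are mine):

```python
import math

def true_to_parametric_angle(phi_deg, rx, ry):
    """Convert a geometric angle (angle of the line from the ellipse center
    to a point on the ellipse) into the parametric angle t such that the
    point (rx*cos t, ry*sin t) lies on that line."""
    phi = math.radians(phi_deg)
    return math.degrees(math.atan2(rx * math.sin(phi), ry * math.cos(phi)))
```

For a circle (rx == ry) this is the identity, which is why the original code worked for circles; for a non-circular ellipse the two angles differ everywhere except at multiples of 90°.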

Libgdx - Keeping an object at certain distance and direction from other object

So let's say I have 2 objects: one with the sprite of a circle, the other with the sprite of a triangle.
My triangle object is set to the position of the mouse in every step of the game, while the circle is either standing in place or moving in its own way.
What I want to do is have the TRIANGLE move around the circle, not on its own, but according to where your cursor is positioned.
So basically: calculate the angle between the circle's center and the triangle's center. Whenever they are far from each other, I just set the triangle's position to the mouse position, BUT when you hover the mouse too close (past some distance X) you can't get any closer (the TRIANGLE is then positioned at that maximum distance X, in the direction from the circle's center to the mouse point).
I'll add a picture and hopefully you can get what I mean.
https://dl.dropboxusercontent.com/u/23334107/help2.png
Steps:
1. Calculate the distance between the cursor and the center of the circle. If it is more than the 'limit' then set the triangle's position to the cursor's position and skip to step 4.
2. Obtain the angle formed between the center of the circle and the cursor.
3. Calculate the new Cartesian coordinates (x, y) of the triangle based of off the polar coordinates we have now (angle and radius). The radius will be set to the limit of the circle before we calculate x and y, because we want the triangle to be prevented from entering this limit.
4. Rotate the image of the triangle to 1.5708-angle where angle was found in step 2. (1.5708, or pi/2, in radians is equivalent to 90°)
Details:
1. The distance between two points (x1, y1) and (x2, y2) is sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2))
2. The angle (in radians) can be calculated with
double angle = Math.atan2(circleY-cursorY, cursorX-circleX);
The seemingly mistaken difference in order of circleY-cursorY and cursorX-circleX is an artefact of the coordinate system in most programming languages. (The y coordinate increases downwards instead of upwards, as it does in mathematics, while the x coordinate increases in concord with math - to the right.)
3. To convert polar coordinates to Cartesian coordinates use these conversions:
triangle.setX( circleX + cos(angle)*limit );
triangle.setY( circleY - sin(angle)*limit );
where limit is the distance you want the triangle to remain from the circle's center. Note the offsets are relative to the circle's center, and the minus sign on the y term matches the flipped y axis used in the atan2 call above.
4. In order to get your triangle to 'face' the circle (as you illustrated), you have to rotate it using the libgdx Sprite function setRotation. It will rotate around the point set with setOrigin.
Now, you have to rotate by 1.5708-angle – this is because of further differences between angles in mathematics and angles in programming! The atan2 function returns the angle as measured mathematically, with 0° at three o'clock and increasing counterclockwise. The setRotation function (as far as I can tell) has 0° at twelve o'clock and increases clockwise. Also, we have to convert from radians to degrees. In short, this should work, but I haven't tested it:
triangle.setRotation(Math.toDegrees(1.5708-angle));
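Steps 1-3 can be cross-checked with a few lines of plain Python (no libgdx; the y-down screen convention matches the atan2 call in step 2, and the names are mine):

```python
import math

def constrained_position(circle, cursor, limit):
    """Triangle position: the cursor, clamped to at least `limit`
    distance from the circle center (y-down screen coordinates)."""
    dx, dy = cursor[0] - circle[0], cursor[1] - circle[1]
    d = math.hypot(dx, dy)                     # step 1: distance
    if d >= limit or d == 0:
        return cursor                          # far enough: follow cursor
    # step 2: angle from circle center to cursor (y-down, hence the flip)
    angle = math.atan2(circle[1] - cursor[1], cursor[0] - circle[0])
    # step 3: polar -> Cartesian at radius `limit`
    return (circle[0] + math.cos(angle) * limit,
            circle[1] - math.sin(angle) * limit)
```

When the cursor is outside the limit the triangle sticks to it; inside the limit the triangle is pushed out to the boundary along the center-to-cursor direction.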
Hope this helps!

How do I calculate pixel shader depth to render a circle drawn on a point sprite as a sphere that will intersect with other objects?

I am writing a shader to render spheres on point sprites, by drawing shaded circles, and need to write a depth component as well as colour in order that spheres near each other will intersect correctly.
I am using code similar to that written by Johna Holwerda:
void PS_ShowDepth(VS_OUTPUT input, out float4 color : COLOR0, out float depth : DEPTH)
{
    // Distance from the center of the point sprite
    float dist = length(input.uv - float2(0.5f, 0.5f));
    float alpha = saturate(sign(0.5f - dist));
    // Calculate how thick the sphere should be; sphereThickness is a variable.
    float sphereDepth = cos(dist * 3.14159) * sphereThickness * particleSize;
    // input.color.w holds the depth value of the pixel on the point sprite
    depth = saturate(sphereDepth + input.color.w);
    color = float4(depth.xxx, alpha); // or anything else you might need in future passes
}
The video at that link gives a good idea of the effect I'm after: those spheres drawn on point sprites intersect correctly. I've added images below to illustrate too.
I can calculate the depth of the point sprite itself fine. However, I am not sure how to calculate the thickness of the sphere at a pixel in order to add it to the sprite's depth to give a final depth value. (The above code uses a variable rather than calculating it.)
I've been working on this on and off for several weeks but haven't figured it out - I'm sure it's simple, but it's something my brain hasn't twigged.
Direct3D 9's point sprite sizes are calculated in pixels, and my sprites have several sizes - both by falloff due to distance (I implemented the same algorithm the old fixed-function pipeline used for point size computations in my vertex shader) and also due to what the sprite represents.
How do I go from the data I have in a pixel shader (sprite location, sprite depth, original world-space radius, radius in pixels onscreen, normalised distance of the pixel in question from the centre of the sprite) to a depth value? A partial solution simply of sprite size to sphere thickness in depth coordinates would be fine - that can be scaled by the normalised distance from the centre to get the thickness of the sphere at a pixel.
I am using Direct3D 9 and HLSL with shader model 3 as the upper SM limit.
In pictures
To demonstrate the technique, and the point at which I'm having trouble:
Start with two point sprites, and in the pixel shader draw a circle on each, using clip to remove fragments outside the circle's boundary:
One will render above the other, since after all they are flat surfaces.
Now, make the shader more advanced, and draw the circle as though it was a sphere, with lighting. Note that even though the flat sprites look 3D, they still draw with one fully in front of the other since it's an illusion: they are still flat.
(The above is easy; it's the final step I am having trouble with and am asking how to achieve.)
Now, instead of the pixel shader writing only colour values, it should write the depth as well:
void SpherePS (...any parameters...
out float4 oBackBuffer : COLOR0,
out float oDepth : DEPTH0 <- now also writing depth
)
{
Note that now the spheres intersect when the distance between them is smaller than their radiuses:
How do I calculate the correct depth value in order to achieve this final step?
Edit / Notes
Several people have commented that a real sphere will distort due to perspective, which may be especially visible at the edges of the screen, and so I should use a different technique. First, thanks for pointing that out, it's not necessarily obvious and is good for future readers! Second, my aim is not to render a perspective-correct sphere, but to render millions of data points fast, and visually I think a sphere-like object looks nicer than a flat sprite, and shows the spatial position better too. Slight distortion or lack of distortion does not matter. If you watch the demo video, you can see how it is a useful visual tool. I don't want to render actual sphere meshes because of the large number of triangles compared to a simple hardware-generated point sprite. I really do want to use the technique of point sprites, and I simply want to extend the extant demo technique in order to calculate the correct depth value, which in the demo was passed in as a variable with no source for how it was derived.
I came up with a solution yesterday which works well and produces the desired result: a sphere drawn on the sprite, with a correct depth value that intersects with other objects and spheres in the scene. It may be less efficient than it needs to be (it calculates and projects two vertices per sprite, for example) and is probably not fully correct mathematically (it takes shortcuts), but it produces visually good results.
The technique
In order to write out the depth of the 'sphere', you need to calculate the radius of the sphere in depth coordinates - i.e., how thick half the sphere is. This amount can then be scaled as you write out each pixel on the sphere by how far from the centre of the sphere you are.
To calculate the radius in depth coordinates:
Vertex shader: in unprojected scene coordinates, cast a ray from the eye through the sphere centre (that is, the vertex that represents the point sprite) and add the radius of the sphere. This gives you a point lying on the surface of the sphere. Project both the sprite vertex and your new sphere-surface vertex, and calculate depth (z/w) for each. The difference is the depth value you need.
Pixel shader: to draw a circle you already calculate a normalised distance from the centre of the sprite, using clip to not draw pixels outside the circle. Since it's normalised (0-1), multiply it by the sphere depth (which is the depth value of the radius, i.e. the pixel at the centre of the sphere) and add that to the depth of the flat sprite itself. This gives a depth that is thickest at the sphere centre, falling to 0 at the edge, following the surface of the sphere. (Depending on how accurate you need it to be, use a cosine to get a curved thickness. I found linear gave perfectly fine-looking results.)
Code
This is not full code since my effects are for my company, but the code here is rewritten from my actual effect file omitting unnecessary / proprietary stuff, and should be complete enough to demonstrate the technique.
Vertex shader
void SphereVS(float4 vPos, // Input vertex
    float fPointRadius, // Radius of circle / sphere in world coords
    out float fDXScale, // Result of DirectX algorithm to scale the sprite size
    out float fDepth, // Flat sprite depth
    out float4 oPos : POSITION0, // Projected sprite position
    out float fDiameter : PSIZE, // Sprite size in pixels (DX point sprites are sized in px)
    out float fSphereRadiusDepth : TEXCOORDn) // Radius of the sphere in depth coords
{
...
// Normal projection
oPos = mul(vPos, g_mWorldViewProj);
// DX depth (of the flat billboarded point sprite)
fDepth = oPos.z / oPos.w;
// Also scale the sprite size - DX specifies a point sprite's size in pixels.
// One (old) algorithm is in http://msdn.microsoft.com/en-us/library/windows/desktop/bb147281(v=vs.85).aspx
fDXScale = ...;
fDiameter = fDXScale * fPointRadius;
// Finally, the key: what's the depth coord to use for the thickness of the sphere?
fSphereRadiusDepth = CalculateSphereDepth(vPos, fPointRadius, fDepth, fDXScale);
...
}
All standard stuff, but I include it to show how it's used.
The key method and the answer to the question is:
float CalculateSphereDepth(float4 vPos, float fPointRadius, float fSphereCenterDepth, float fDXScale) {
// Calculate sphere depth. Do this by calculating a point on the
// far side of the sphere, ie cast a ray from the eye, through the
// point sprite vertex (the sphere center) and extend it by the radius
// of the sphere
// The difference in depths between the sphere center and the sphere
// edge is then used to write out sphere 'depth' on the sprite.
float4 vRayDir = vPos - g_vecEyePos;
float fLength = length(vRayDir);
vRayDir = normalize(vRayDir);
fLength = fLength + fPointRadius; // Distance from eye through sphere center to edge of sphere
float4 oSphereEdgePos = g_vecEyePos + (fLength * vRayDir); // Point on the edge of the sphere
oSphereEdgePos.w = 1.0;
oSphereEdgePos = mul(oSphereEdgePos, g_mWorldViewProj); // Project it
// DX depth calculation of the projected sphere-edge point
const float fSphereEdgeDepth = oSphereEdgePos.z / oSphereEdgePos.w;
float fSphereRadiusDepth = fSphereCenterDepth - fSphereEdgeDepth; // Difference between center and edge of sphere
fSphereRadiusDepth *= fDXScale; // Account for sphere scaling
return fSphereRadiusDepth;
}
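To see the behaviour CalculateSphereDepth captures, here is a small Python sketch (not the HLSL above) that computes z/w depths through a standard D3D-style projection; the near/far values and sign conventions are illustrative assumptions, not taken from the original effect:

```python
def perspective_z_w(z_eye, near=0.1, far=100.0):
    """Depth (z/w) produced by a D3D-style projection for an eye-space
    depth z_eye (camera looking down +z); 0 at the near plane, 1 at far."""
    a = far / (far - near)
    b = -near * far / (far - near)
    return (z_eye * a + b) / z_eye

def sphere_radius_depth(center_dist, radius):
    """Difference in z/w between the sphere center and the point one
    radius further along the view ray (cf. CalculateSphereDepth)."""
    return perspective_z_w(center_dist + radius) - perspective_z_w(center_dist)
```

Because z/w is nonlinear in eye depth, the same world-space radius maps to a smaller and smaller depth difference as the sphere moves away from the camera, which is why the radius-in-depth must be computed per sprite rather than fixed as a constant.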
Pixel shader
void SpherePS(
...
float fSpriteDepth : TEXCOORD0,
float fSphereRadiusDepth : TEXCOORD1,
out float4 oFragment : COLOR0,
out float fSphereDepth : DEPTH0
)
{
float fCircleDist = ...; // See example code in the question
// 0-1 value from the center of the sprite, use clip to form the sprite into a circle
clip(fCircleDist);
fSphereDepth = fSpriteDepth + (fCircleDist * fSphereRadiusDepth);
// And calculate a pixel color
oFragment = ...; // Add lighting etc here
}
This code omits lighting etc. To calculate how far the pixel is from the centre of the sprite (to get fCircleDist) see the example code in the question (calculates 'float dist = ...') which already drew a circle.
The end result is...
Result
Voila, point sprites drawing spheres.
Notes
- The scaling algorithm for the sprites may require the depth to be scaled, too. I am not sure that line is correct.
- It is not fully mathematically correct (it takes shortcuts), but as you can see the result is visually correct.
- When using millions of sprites, I still get a good rendering speed (<10 ms per frame for 3 million sprites, on a VMware Fusion emulated Direct3D device).
The first big mistake is that a real 3D sphere will not project to a circle under perspective 3D projection.
This is very non-intuitive, but look at some pictures, especially with a large field of view and off-center spheres.
Second, I would recommend against using point sprites in the beginning, it might make things harder than necessary, especially considering the first point. Just draw a generous bounding quad around your sphere and go from there.
In your shader you should have the screen-space position as an input. From that, the view transform, and your projection matrix, you can get to a line in eye space. You need to intersect this line with the sphere in eye space (ray tracing), get the eye-space intersection point, and transform that back to screen space. Then output 1/w as depth. I am not doing the math for you here because I am a bit drunk and lazy, and I don't think that's what you really want to do anyway. It's a great exercise in linear algebra though, so maybe you should try it. :)
The effect you are probably trying to achieve is called depth sprites, and it is usually used only with an orthographic projection and with the depth of the sprite stored in a texture. Just store the depth along with your color, for example in the alpha channel, and output
eye.z+(storeddepth-.5)*depthofsprite.
A sphere will not project to a circle in the general case. Here is a solution.
This technique is called spherical billboards. An in-depth description can be found in this paper:
Spherical Billboards and their Application to Rendering Explosions
You draw point sprites as quads and then sample a depth texture in order to find the distance between the per-pixel Z-value and your current Z-coordinate. The distance between the sampled Z-value and the current Z affects the opacity of the pixel to make it look like a sphere while intersecting underlying geometry. The authors of the paper suggest the following code to compute opacity:
// P: eye-space position of the current pixel on the billboard
// Q: eye-space center of the sphere, r: its radius
// scr: screen coordinates used to sample the depth texture
// Depth (sampler), tau (density), and f (front/near clipping plane depth)
// are globals in the paper's listing
float Opacity(float3 P, float3 Q, float r, float2 scr)
{
    float alpha = 0;
    float d = length(P.xy - Q.xy);
    if (d < r) {
        float w = sqrt(r*r - d*d);         // half-thickness of the sphere at this pixel
        float F = P.z - w;                 // front of the sphere
        float B = P.z + w;                 // back of the sphere
        float Zs = tex2D(Depth, scr);      // scene depth behind the billboard
        float ds = min(Zs, B) - max(f, F); // visible thickness, clipped by scene and near plane
        alpha = 1 - exp(-tau * (1 - d/r) * ds);
    }
    return alpha;
}
This will prevent sharp intersections of your billboards with the scene geometry.
If the point-sprite pipeline is difficult to control (I can only speak for OpenGL, not DirectX), it is better to use GPU-accelerated billboarding: you supply 4 equal 3D vertices that match the center of the particle, then move them into the appropriate billboard corners in a vertex shader, i.e.:
if ( idx == 0 ) ParticlePos += (-X - Y);
if ( idx == 1 ) ParticlePos += (+X - Y);
if ( idx == 2 ) ParticlePos += (+X + Y);
if ( idx == 3 ) ParticlePos += (-X + Y);
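A minimal sketch of that corner expansion (plain Python rather than shader code; the camera's right/up axes are passed in, and the names are mine):

```python
def billboard_corners(center, right, up, half_size):
    """Expand a particle center into 4 quad corners using the camera's
    right and up axes, so the quad always faces the camera."""
    cx, cy, cz = center
    offsets = [(-1, -1), (1, -1), (1, 1), (-1, 1)]  # matches idx 0..3 above
    return [(cx + (sx * right[0] + sy * up[0]) * half_size,
             cy + (sx * right[1] + sy * up[1]) * half_size,
             cz + (sx * right[2] + sy * up[2]) * half_size)
            for sx, sy in offsets]
```

In a real vertex shader the per-vertex index (or gl_VertexID) selects the offset pair, exactly as in the four if-statements above.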
This is more oriented to the modern GPU pipeline and of course will work with any nondegenerate perspective projection.
