Qt Quick3D: draw a 3D line between 2 points

I am using the new QtQuick3D from Qt 6. I need to draw a line between 2 points, but couldn't find a dedicated function for this, so I decided to use a basic cylinder that I can scale and rotate. The scaling works as intended, but the rotation has some issues.
Node {
    property vector3d center
    property vector3d scaleCylinder
    property vector3d eulerAngles

    Model {
        id: line
        position: center
        source: "#Cylinder"
        scale: scaleCylinder
        eulerRotation: eulerAngles
        materials: DefaultMaterial {
            diffuseColor: "blue"
        }
    }
}
I compute the rotation with the Eigen library, using an angle-axis to retrieve the Euler angles. The cylinder's axis is aligned with the Y axis when it is displayed.
AngleAxis angleAxis(const PointType vec) {
    double angle;
    VectorType axis;
    PointType a = PointType({0, 1, 0});
    PointType v = normalize(vec);
    axis = point2vector(normalize(cross(a, v)));
    angle = acos(dot(a, v));
    return AngleAxis(angle, axis);
}
void setEulerAngles() {
    AngleAxis rotation = angleAxis(target - entry);
    VectorType angles = rotation.toRotationMatrix().eulerAngles(0, 1, 2);
    eulerAngles = QVector3D(float(angles[0] * 180.0 / M_PI),
                            float(angles[1] * 180.0 / M_PI),
                            float(angles[2] * 180.0 / M_PI));
}
Did I make a mistake when computing my Euler angles? Should I use another technique, like quaternions? Maybe there is even a simpler solution I don't know about.
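For reference, the quaternion route I have in mind would look roughly like this (just a sketch, untested; lineRotation is a name I made up, and entryPos/targetPos stand for the two endpoints as QVector3D):
#include <QQuaternion>
#include <QVector3D>

// Sketch: compute the rotation as a quaternion instead of Euler angles.
// QQuaternion::rotationTo returns the shortest rotation that maps the
// cylinder's local Y axis onto the segment direction.
QQuaternion lineRotation(const QVector3D &entryPos, const QVector3D &targetPos) {
    QVector3D dir = (targetPos - entryPos).normalized();
    return QQuaternion::rotationTo(QVector3D(0.0f, 1.0f, 0.0f), dir);
}
On the QML side, the Model would then bind its rotation property (which takes a quaternion directly) instead of eulerRotation.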

Related

3D: avoid pinching at poles when creating sphere from polar coordinates

I'm using Wikipedia's spherical coordinate system article to create a sphere made out of particles in Three.js. Based on this article, I created a small Polarizer class that takes in polar coordinates with setPolar(rho, theta, phi) and returns the corresponding x, y, z coordinates.
Here's the setPolar() function:
// Rho: radius
// theta θ: polar angle on Y axis
// phi φ: azimuthal angle on Z axis
Polarizer.prototype.setPolar = function(rho, theta, phi) {
    // Limit values to zero
    this.rho = Math.max(0, rho);
    this.theta = Math.max(0, theta);
    this.phi = Math.max(0, phi);
    // Calculate x, y, z
    this.x = this.rho * Math.sin(this.theta) * Math.sin(this.phi);
    this.y = this.rho * Math.cos(this.theta);
    this.z = this.rho * Math.sin(this.theta) * Math.cos(this.phi);
    return this;
}
I'm using it to position my particles as follows:
var tempPolarizer = new Polarizer();
for (var i = 0; i < geometry.vertices.length; i++) {
    tempPolarizer.setPolar(
        50,                          // Radius of 50
        Math.random() * Math.PI,     // Theta ranges from 0 - PI
        Math.random() * 2 * Math.PI  // Phi ranges from 0 - 2PI
    );
    // Set new vertex positions
    geometry.vertices[i].set(
        tempPolarizer.x,
        tempPolarizer.y,
        tempPolarizer.z
    );
}
It works wonderfully, except that I'm getting high particle densities, or "pinching" at the poles:
I'm stumped as to how to avoid this from happening. I thought of passing a weighted random number to the latitude, but I'm hoping to animate the particles without the longitude also slowing down and bunching up at the poles.
Is there a different formula to generate a sphere where the poles don't get as much weight? Should I be using quaternions instead?
For random uniform sampling
use a random point in the unit cube, treat it as a vector, and set its length to the radius of your sphere. For example, something like this in C++:
x = 2.0*Random()-1.0;
y = 2.0*Random()-1.0;
z = 2.0*Random()-1.0;
m=r/sqrt(x*x+y*y+z*z);
x*=m;
y*=m;
z*=m;
where Random() returns a number in <0.0,1.0>. For more info see:
Procedural generation of stars with skybox
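A small caveat (my addition, not part of the original answer): normalizing a point drawn uniformly from the cube slightly favors the directions of the cube's corners; rejecting samples that fall outside the unit ball first gives an exactly uniform direction. A C++ sketch of that variant:
#include <cmath>
#include <cstdlib>

// Random() as used above: uniform double in <0.0,1.0>.
static double Random() { return std::rand() / double(RAND_MAX); }

// Keep drawing points in the cube until one falls inside the unit ball
// (and not too close to the origin), then scale it to radius r.
void randomOnSphere(double r, double &x, double &y, double &z) {
    double m;
    do {
        x = 2.0 * Random() - 1.0;
        y = 2.0 * Random() - 1.0;
        z = 2.0 * Random() - 1.0;
        m = x * x + y * y + z * z;       // squared length
    } while (m > 1.0 || m < 1e-12);      // reject: outside the ball or near 0
    m = r / std::sqrt(m);
    x *= m;
    y *= m;
    z *= m;
}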
For uniform non-random sampling
see related QAs:
Sphere triangulation by mesh subdivision
Make a sphere with equidistant vertices
In order to avoid the high density at the poles, I had to lower the likelihood of theta (the polar angle) landing close to 0 and PI. My original input of Math.random() * Math.PI for theta gives equal likelihood to all values (orange). Using Math.acos((Math.random() * 2) - 1) instead weights the output so that values near 0 and PI are less likely along the sphere's surface (yellow). This works because a uniform distribution over the sphere requires cos(theta), not theta itself, to be uniformly distributed in [-1, 1].
Now I can't even tell where the poles are!
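The same weighting in C++, for reference (my addition; Point3 is just a placeholder struct, and the axis convention matches setPolar() above):
#include <cmath>
#include <random>

struct Point3 { double x, y, z; };

// Drawing cos(theta) uniformly from [-1, 1] and taking acos removes the
// pole clustering; phi stays uniform in [0, 2*pi).
Point3 randomPointOnSphere(double rho, std::mt19937 &rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double pi = std::acos(-1.0);
    double theta = std::acos(2.0 * uni(rng) - 1.0);  // polar angle, weighted
    double phi = 2.0 * pi * uni(rng);                // azimuth, uniform
    return { rho * std::sin(theta) * std::sin(phi),
             rho * std::cos(theta),
             rho * std::sin(theta) * std::cos(phi) };
}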

Piechart on a Hexagon

I want to produce a pie chart on a hexagon. There are probably several solutions for this. The picture shows my hexagon and two ideas:
My Hexagon (6 vertices, 4 faces)
How it should look at the end (without the gray lines)
Math: Can I get some information from the object to dynamically calculate new vertices (from the center to each point) to add colored faces?
Clipping: On a sphere a pie chart is easy; maybe I can clip the THREE object (WITHOUT SVG.js!) so I just see the hexagon with the clipped chart?
Well the whole clipping thing in three.js is already solved here : Object Overflow Clipping Three JS, with a fiddle that shows it works and all.
So I'll go for the "vertices" option, or rather, a function that, given a list of values gives back a list of polygons, one for each value, that are portions of the hexagon, such that
they all have the centre point as a vertex
the angle they have at that point is proportional to the value
they form a partition of the hexagon
Let us suppose the hexagon is inscribed in a circle of radius R, and defined by the vertices :
{(R sqrt(3)/2, R/2), (0,R), (-R sqrt(3)/2, R/2), (-R sqrt(3)/2, -R/2), (0,-R), (R sqrt(3)/2, -R/2)}
This comes easily from the values cos(Pi/6), sin(Pi/6) and various symmetries.
Getting the angles at the centre for each polygon is pretty simple, since it is the same as for a circle. Now we need to know the position of the points that are on the hexagon.
Note that if you use the symmetries of the coordinate axes, there are only two cases : [0,Pi/6] and [Pi/6,Pi/2], and you then get your result by mirroring. If you use the rotational symmetry by Pi/3, you only have one case : [-Pi/6,Pi/6], and you get the result by rotation.
Using rotational symmetry
Thus for every point, you can consider its angle to be in [-Pi/6,Pi/6]. Any point on the hexagon in that part has x = R sqrt(3)/2, which simplifies the problem a lot: we only have to find its y value.
Now we assumed that we know the polar coordinate angle for our point, since it is the same as for a circle. Let us call it beta, and alpha its value in [-Pi/6,Pi/6] (modulo Pi/3). We don't know at what distance d the point is from the centre, and thus we have the following system:
d cos(alpha) = R sqrt(3)/2
d sin(alpha) = y
which is trivially solved since cos is never 0 in the range [-Pi/6,Pi/6].
Thus d=R sqrt(3)/( 2 cos(alpha) ), and y=d sin(alpha)
So now we know
the angle beta from the centre
its distance d from the centre, thanks to rotational symmetry
So our point is (d cos(beta), d sin(beta))
Code
Yeah, I got curious, so I ended up coding it. Sorry if you wanted to play with it yourself. It's working, and pretty ugly in the end (at least with this dataset), see the jsfiddle : http://jsfiddle.net/vb7on8vo/5/
var R = 100;
var hexagon = [
    {x: R*Math.sqrt(3)/2, y: R/2},
    {x: 0, y: R},
    {x: -R*Math.sqrt(3)/2, y: R/2},
    {x: -R*Math.sqrt(3)/2, y: -R/2},
    {x: 0, y: -R},
    {x: R*Math.sqrt(3)/2, y: -R/2}
];
var hex_angles = [Math.PI / 6, Math.PI / 2, 5*Math.PI / 6, 7*Math.PI / 6, 3*Math.PI / 2, 11*Math.PI / 6];

function regions(values)
{
    var i, total = 0, regions = [];
    for (i = 0; i < values.length; i++)
        total += values[i];
    // first (0 rad) and last (2Pi rad) points are always at x=R Math.sqrt(3)/2, y=0
    var prev_point = {x: hexagon[0].x, y: 0}, last_angle = 0;
    for (i = 0; i < values.length; i++)
    {
        var j, theta, p = [{x: 0, y: 0}, prev_point], beta = last_angle + values[i] * 2 * Math.PI / total;
        for (j = 0; j < hexagon.length; j++)
        {
            theta = hex_angles[j];
            if (theta <= last_angle)
                continue;
            else if (theta >= beta)
                break;
            else
                p.push(hexagon[j]);
        }
        var alpha = beta - (Math.PI * (j % 6) / 3); // segment 6 is segment 0
        var d = hexagon[0].x / Math.cos(alpha);
        var point = {x: d*Math.cos(beta), y: d*Math.sin(beta)};
        p.push(point);
        regions.push(p.slice(0));
        last_angle = beta;
        prev_point = {x: point.x, y: point.y};
    }
    return regions;
}

Drawing a rotating sphere by using a pixel shader in Direct3D

I would like to draw a textured circle in Direct3D which looks like a real 3D sphere. For this purpose, I took a texture of a billard ball and tried to write a pixel shader in HLSL, which maps it onto a simple pre-transformed quad in such a way that it looks like a 3-dimensional sphere (apart from the lighting, of course).
This is what I've got so far:
struct PS_INPUT
{
    float2 Texture : TEXCOORD0;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

sampler2D Tex0;

// main function
PS_OUTPUT ps_main( PS_INPUT In )
{
    // default color for points outside the sphere (alpha=0, i.e. invisible)
    PS_OUTPUT Out;
    Out.Color = float4(0, 0, 0, 0);
    float pi = acos(-1);
    // map texel coordinates to [-1, 1]
    float x = 2.0 * (In.Texture.x - 0.5);
    float y = 2.0 * (In.Texture.y - 0.5);
    float r = sqrt(x * x + y * y);
    // if the texel is not inside the sphere
    if (r > 1.0f)
        return Out;
    // 3D position on the front half of the sphere
    float p[3] = {x, y, sqrt(1 - x*x + y*y)};
    // calculate UV mapping
    float u = 0.5 + atan2(p[2], p[0]) / (2.0*pi);
    float v = 0.5 - asin(p[1]) / pi;
    // do some simple antialiasing
    float alpha = saturate((1-r) * 32); // scale by half quad width
    Out.Color = tex2D(Tex0, float2(u, v));
    Out.Color.a = alpha;
    return Out;
}
The texture coordinates of my quad range from 0 to 1, so I first map them to [-1, 1]. After that I followed the formula in this article to calculate the correct texture coordinates for the current point.
At first, the outcome looked ok, but I'd like to be able to rotate this illusion of a sphere arbitrarily. So I gradually increased u in the hope of rotating the sphere around the vertical axis. This is the result:
As you can see, the imprint of the ball looks unnaturally deformed when it reaches the edge. Can anyone see any reason for this? And additionally, how could I implement rotations around an arbitrary axis?
Thanks in advance!
I finally found the mistake myself: the calculation of the z value which corresponds to the current point (x, y) on the front half of the sphere was wrong. It must of course be:
p[2] = sqrt(1 - x*x - y*y)
That's all, it works as expected now. Furthermore, I figured out how to rotate the sphere. You just have to rotate the point p before calculating u and v by multiplying it with a 3D rotation matrix, like this one for example.
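For instance (my note, just to make that last step concrete), a rotation by an angle a around the sphere's vertical axis maps p to
p'[0] = cos(a)*p[0] + sin(a)*p[2]
p'[1] = p[1]
p'[2] = -sin(a)*p[0] + cos(a)*p[2]
and u, v are then computed from p' exactly as before.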
The result looks like the following:
If anyone has any advice as to how I could smooth the texture a little bit, please leave a comment.

Bounding Boxes for Circle and Arcs in 3D

Given curves of type Circle and Circular-Arc in 3D space, what is a good way to compute accurate bounding boxes (world axis aligned)?
Edit: found a solution for circles, still need help with arcs.
C# snippet for solving BoundingBoxes for Circles:
public static BoundingBox CircleBBox(Circle circle)
{
    Point3d O = circle.Center;
    Vector3d N = circle.Normal;
    double ax = Angle(N, new Vector3d(1, 0, 0));
    double ay = Angle(N, new Vector3d(0, 1, 0));
    double az = Angle(N, new Vector3d(0, 0, 1));
    Vector3d R = new Vector3d(Math.Sin(ax), Math.Sin(ay), Math.Sin(az));
    R *= circle.Radius;
    return new BoundingBox(O - R, O + R);
}

private static double Angle(Vector3d A, Vector3d B)
{
    double dP = A * B;
    if (dP <= -1.0) { return Math.PI; }
    if (dP >= +1.0) { return 0.0; }
    return Math.Acos(dP);
}
One thing that's not specified is how you convert that angle range to points in space. So we'll start there and assume that the angle 0 maps to O + rX and the angle π/2 maps to O + rY, where O is the center of the circle and
X = (x1,x2,x3)
and
Y = (y1,y2,y3)
are unit vectors.
So the circle is swept out by the function
P(θ) = O + rcos(θ)X + rsin(θ)Y
where θ is in the closed interval [θstart,θend].
The derivative of P is
P'(θ) = -rsin(θ)X + rcos(θ)Y
For the purpose of computing a bounding box we're interested in the points where one of the coordinates reaches an extremal value, hence points where one of the coordinates of P' is zero.
Setting -rsin(θ)xi + rcos(θ)yi = 0 we get
tan(θ) = sin(θ)/cos(θ) = yi/xi.
So we're looking for θ where θ = arctan(yi/xi) for i in {1,2,3}.
You have to watch out for the details of the range of arctan(), and avoiding divide-by-zero, and that if θ is a solution then so is θ±k*π, and I'll leave those details to you.
All you have to do is find the set of θ corresponding to extremal values in your angle range, and compute the bounding box of their corresponding points on the circle, and you're done. It's possible that there are no extremal values in the angle range, in which case you compute the bounding box of the points corresponding to θstart and θend. In fact you may as well initialize your solution set of θ's with those two values, so you don't have to special case it.
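As a concrete illustration (my own C++ sketch, not the answer's code; the Vec3 type and the function names are made up), collecting the extremal angles in [t0, t1] together with the endpoints and taking the box of the corresponding points might look like this:
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double v[3]; };

// Point on the circle/arc: P(t) = O + r*cos(t)*X + r*sin(t)*Y,
// with X and Y unit vectors spanning the circle's plane.
static Vec3 pointAt(const Vec3 &O, const Vec3 &X, const Vec3 &Y, double r, double t) {
    Vec3 p;
    for (int i = 0; i < 3; ++i)
        p.v[i] = O.v[i] + r * std::cos(t) * X.v[i] + r * std::sin(t) * Y.v[i];
    return p;
}

// World-axis-aligned bounding box of the arc with t in [t0, t1], t0 < t1.
void arcBBox(const Vec3 &O, const Vec3 &X, const Vec3 &Y, double r,
             double t0, double t1, Vec3 &boxMin, Vec3 &boxMax) {
    const double pi = std::acos(-1.0);
    std::vector<double> ts = { t0, t1 };            // always include the endpoints
    for (int i = 0; i < 3; ++i) {
        // coordinate i is extremal where tan(t) = y_i/x_i, i.e. t = atan2(y_i, x_i) + k*pi
        double t = std::atan2(Y.v[i], X.v[i]);
        int kmin = (int)std::ceil((t0 - t) / pi);
        int kmax = (int)std::floor((t1 - t) / pi);
        for (int k = kmin; k <= kmax; ++k)
            ts.push_back(t + k * pi);
    }
    boxMin = boxMax = pointAt(O, X, Y, r, ts[0]);
    for (double t : ts) {
        Vec3 p = pointAt(O, X, Y, r, t);
        for (int i = 0; i < 3; ++i) {
            boxMin.v[i] = std::min(boxMin.v[i], p.v[i]);
            boxMax.v[i] = std::max(boxMax.v[i], p.v[i]);
        }
    }
}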

Implementing Ray Picking

I have a renderer using DirectX and OpenGL, and a 3D scene. The viewport and the window are of the same dimensions.
How do I implement picking given mouse coordinates x and y in a platform independent way?
If you can, do the picking on the CPU by calculating a ray from the eye through the mouse pointer and intersect it with your models.
If this isn't an option I would go with some type of ID rendering. Assign each object you want to pick a unique color, render the objects with these colors and finally read out the color from the framebuffer under the mouse pointer.
EDIT: If the question is how to construct the ray from the mouse coordinates, you need the following: a projection matrix P and the camera transform C. If the coordinates of the mouse pointer are (x, y) and the size of the viewport is (width, height), one position in clip space along the ray is:
mouse_clip = [
    float(x) * 2 / float(width) - 1,
    1 - float(y) * 2 / float(height),
    0,
    1]
(Notice that I flipped the y-axis since the origin of the mouse coordinates is often in the upper left corner.)
The following is also true:
mouse_clip = P * C * mouse_worldspace
Which gives:
mouse_worldspace = inverse(C) * inverse(P) * mouse_clip
We now have:
p = C.position(); //origin of camera in worldspace
n = normalize(mouse_worldspace - p); //unit vector from p through mouse pos in worldspace
Here's the viewing frustum:
First you need to determine where on the nearplane the mouse click happened:
rescale the window coordinates (0..640,0..480) to [-1,1], with (-1,-1) at the bottom-left corner and (1,1) at the top-right.
'undo' the projection by multiplying the scaled coordinates by what I call the 'unview' matrix: unview = (P * M).inverse() = M.inverse() * P.inverse(), where M is the ModelView matrix and P is the projection matrix.
Then determine where the camera is in worldspace, and draw a ray starting at the camera and passing through the point you found on the nearplane.
The camera is at M.inverse().col(4), i.e. the final column of the inverse ModelView matrix.
Final pseudocode:
normalised_x = 2 * mouse_x / win_width - 1
normalised_y = 1 - 2 * mouse_y / win_height
// note the y pos is inverted, so +y is at the top of the screen
unviewMat = (projectionMat * modelViewMat).inverse()
near_point = unviewMat * Vec(normalised_x, normalised_y, 0, 1)
camera_pos = ray_origin = modelViewMat.inverse().col(4)
ray_dir = near_point - camera_pos
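For what it's worth, here is roughly the same thing written against GLM (a sketch; pickRayDir is a made-up helper name, and it assumes the mouse origin is at the top-left of the window):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject

// Returns a unit direction in world space from the camera through the mouse.
// viewport is (x, y, width, height), as passed to glViewport.
glm::vec3 pickRayDir(float mouseX, float mouseY,
                     const glm::mat4 &viewMat, const glm::mat4 &projMat,
                     const glm::vec4 &viewport, const glm::vec3 &cameraPos) {
    // glm::unProject expects window coords with the origin at the bottom-left,
    // so flip y; z = 0 means "on the near plane".
    glm::vec3 win(mouseX, viewport.w - mouseY, 0.0f);
    glm::vec3 nearPoint = glm::unProject(win, viewMat, projMat, viewport);
    return glm::normalize(nearPoint - cameraPos);
}
The ray origin is simply the camera position (or the near-plane point itself).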
Well, pretty simple, the theory behind this is always the same:
1) Unproject your 2D coordinate twice into 3D space (each API has its own function, but you can implement your own if you want): once at min Z, once at max Z.
2) With these two values, calculate the vector that goes from min Z and points to max Z.
3) With the vector and a point, calculate the ray that goes from min Z to max Z.
4) Now you have a ray; with it you can do a ray-triangle/ray-plane/ray-something intersection and get your result...
I have little DirectX experience, but I'm sure it's similar to OpenGL. What you want is the gluUnproject call.
Assuming you have a valid Z buffer you can query the contents of the Z buffer at a mouse position with:
// obtain the viewport, modelview matrix and projection matrix
// you may keep the viewport and projection matrices throughout the program if you don't change them
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
// obtain the Z position (not world coordinates but in range 0 - 1)
GLfloat z_cursor;
glReadPixels(x_cursor, y_cursor, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
// obtain the world coordinates
GLdouble x, y, z;
gluUnProject(x_cursor, y_cursor, z_cursor, modelview, projection, viewport, &x, &y, &z);
If you don't want to use GLU, you can also implement gluUnProject yourself; its functionality is relatively simple and is described at opengl.org.
Ok, this topic is old but it was the best I found on the topic, and it helped me a bit, so I'll post here for those who are following ;-)
This is the way I got it to work without having to compute the inverse of the projection matrix:
void Application::leftButtonPress(u32 x, u32 y) {
    GL::Viewport vp = GL::getViewport(); // just a call to glGet GL_VIEWPORT
    vec3f p = vec3f::from(
        ((float)(vp.width - x) / (float)vp.width),
        ((float)y / (float)vp.height),
        1.);
    // alternatively vec3f p = vec3f::from(
    //     ((float)x / (float)vp.width),
    //     ((float)(vp.height - y) / (float)vp.height),
    //     1.);
    p *= vec3f::from(APP_FRUSTUM_WIDTH, APP_FRUSTUM_HEIGHT, 1.);
    p += vec3f::from(APP_FRUSTUM_LEFT, APP_FRUSTUM_BOTTOM, 0.);
    // now p elements are in (-1, 1)
    vec3f near = p * vec3f::from(APP_FRUSTUM_NEAR);
    vec3f far = p * vec3f::from(APP_FRUSTUM_FAR);
    // ray in world coordinates
    Ray ray = { _camera->getPos(), -(_camera->getBasis() * (far - near).normalize()) };
    _ray->set(ray.origin, ray.dir, 10000.); // this is a debugging vertex array to see the Ray on screen
    Node* node = _scene->collide(ray, Transform());
    cout << "node is : " << node << endl;
}
This assumes a perspective projection, but the question never arises for the orthographic one in the first place.
I've got the same situation with ordinary ray picking, but something is wrong. I've performed the unproject operation the proper way, but it just doesn't work. I think I've made some mistake, but I can't figure out where. My matrix multiplication, inverse, and vector-by-matrix multiplication all seem to work fine, I've tested them.
In my code I'm reacting to WM_LBUTTONDOWN. So lParam returns the [Y][X] coordinates as 2 words in a dword. I extract them, then convert to normalized space; I've checked that this part also works fine. When I click the lower left corner I get values close to -1, -1 and good values for all 3 other corners. I'm then using the line_points.vtx array for debugging, and it's not even close to reality.
unsigned int x_coord=lParam&0x0000ffff; //X RAW COORD
unsigned int y_coord=client_area.bottom-(lParam>>16); //Y RAW COORD
double xn=((double)x_coord/client_area.right)*2-1; //X [-1 +1]
double yn=1-((double)y_coord/client_area.bottom)*2;//Y [-1 +1]
_declspec(align(16))gl_vec4 pt_eye(xn,yn,0.0,1.0);
gl_mat4 view_matrix_inversed;
gl_mat4 projection_matrix_inversed;
cam.matrixProjection.inverse(&projection_matrix_inversed);
cam.matrixView.inverse(&view_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&projection_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&view_matrix_inversed);
line_points.vtx[line_points.count*4]=pt_eye.x-cam.pos.x;
line_points.vtx[line_points.count*4+1]=pt_eye.y-cam.pos.y;
line_points.vtx[line_points.count*4+2]=pt_eye.z-cam.pos.z;
line_points.vtx[line_points.count*4+3]=1.0;
