I use the following code to get equirectangular texture coordinates based on an object's position in the world:
Equirec(positionX, positionY, positionZ) {
    radius = sqrt(positionX^2 + positionZ^2);
    a = atan2(-positionX, positionZ);
    b = atan2(positionY, radius);
    uv.x = (a - pi) / (-2 * pi);
    uv.y = (b + pi / 2) / pi;
    return uv;
}
Is it possible to invert this function?
What I want to do is, given the uv coordinates returned from this function, figure out the corresponding position in the world.
Not possible. You have thrown away information. The UV coordinates are two angles, so they only tell you the direction from the origin; the distance (the length of the position vector) was discarded. If you knew that distance — or some other constraint, such as a plane the point lies on — you could recover the position.
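For what it's worth, the mapping can be inverted up to that missing distance: the two angles give you a unit direction, which you can then scale by a known distance or intersect with a known surface. A minimal TypeScript sketch (not from the original post; the function name is mine):

// Recovers the unit direction that Equirec() would map to (u, v).
// Multiply the result by a known distance to get an actual world position.
function equirecToDirection(u: number, v: number): { x: number; y: number; z: number } {
  const a = Math.PI - u * 2 * Math.PI;   // undo uv.x = (a - pi) / (-2 * pi)
  const b = v * Math.PI - Math.PI / 2;   // undo uv.y = (b + pi / 2) / pi
  const r = Math.cos(b);                 // horizontal radius on the unit sphere
  return {
    x: -r * Math.sin(a),                 // undo a = atan2(-x, z)
    y: Math.sin(b),                      // undo b = atan2(y, radius)
    z: r * Math.cos(a),
  };
}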
I am able to get the camera's current position, i.e., its x, y, z coordinates, in A-Frame.
In the code below, I am making my camera move forward.
function move_camera_forward() {
    var x = $("#cam").attr("position").x;
    var y = $("#cam").attr("position").y;
    var z = $("#cam").attr("position").z;
    var updated_pos = x + " " + y + " " + String(Number(z) - 0.2);
    $("#cam").attr("position", updated_pos);
}
But this moves the camera along the z-axis irrespective of the direction the camera is facing. I want to move the camera based on the direction it is facing. If the camera is facing, let's say, 45 degrees, I want to update all three coordinates accordingly. For this I need to find out which direction the camera is facing. How can I do this? Does it have something to do with the fov?
I finally figured out how to do this. The camera has a rotation attribute which gives me the angle of rotation. With this data and a bit of trigonometry, we can find the updated position. The code below moves the camera in the direction the user is looking.
// Accumulated displacement from the starting position on the x/z plane.
var new_x = 0;
var new_z = 0;

function move_camera_forward() {
    var x = $("#cam").attr("position").x;
    var y = $("#cam").attr("position").y;
    var z = $("#cam").attr("position").z;
    // A-Frame rotations are in degrees; convert the yaw (rotation about y) to radians.
    var radian = -($("#cam").attr("rotation").y) * (Math.PI / 180);
    new_z = new_z + (0.1 * Math.cos(radian));
    new_x = new_x + (0.1 * Math.sin(radian));
    var new_pos = new_x + " " + y + " " + (-new_z);
    console.log(new_pos);
    $("#cam").attr("position", new_pos);
}
You can dive into the three.js API to get additional info that A-Frame doesn't necessarily bubble up to the surface. You can get the camera object using
var camera = document.querySelector('[camera]').object3D
and then you have access to all of the camera's vector data. To get the direction the camera is facing you can use camera.getWorldDirection(), which returns a Vector3 with x, y and z values.
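For example, a rough TypeScript sketch (assuming a recent three.js build where getWorldDirection takes a target vector; the #cam id is carried over from the question, and whether you need to negate the result depends on whether you grab the THREE.Camera itself or its parent entity object, since a camera looks down its local -Z axis):

import * as THREE from 'three'; // in an A-Frame page THREE is already available as a global

function moveCameraForward(step = 0.2): void {
  const camEl = document.querySelector('#cam') as any;   // the A-Frame camera entity
  const obj: THREE.Object3D = camEl.object3D;

  const dir = new THREE.Vector3();
  obj.getWorldDirection(dir);                // world-space direction vector (x, y, z)
  obj.position.addScaledVector(dir, step);   // step along that direction (negate dir if it points backwards)
}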
I've been scratching my head for some time now over how to do this.
I have two vectors defined in 3D space: say vector X at (0,0,0) and vector Y at (3,3,3). I will get a random point on the line between those two, and around this point I want to form a circle (some number of points) of a given radius, perpendicular to the line between X and Y.
Hopefully it's clear what I am looking for. I have looked through many similar questions, but just can't figure it out from those. Thanks for any help.
Edit:
(Couldn't fit everything into a comment, so adding it here.)
#WillyWonka
Hi, thanks for your reply. I had some moderate success implementing your solution, but have had some trouble with it. It works most of the time, except for specific scenarios where the Y point is at a position like (20,20,20). If it sits directly on any axis it's fine.
But as soon as it moves onto a diagonal, the distance between the perpendicular points and the origin gets smaller for some reason, and at very specific diagonal positions it flips the perpendicular points.
Here is the code for you to look at
public Vector3 X = new Vector3(0, 0, 0);
public Vector3 Y = new Vector3(0, 0, 20);

Vector3 A;
Vector3 B;
List<Vector3> points = new List<Vector3>();

void FindPerpendicular(Vector3 x, Vector3 y)
{
    Vector3 direction = (x - y);
    Vector3 normalized = (x - y).normalized;

    // Attempt to build a helper vector that is not parallel to the line by
    // weighting the world axes (this weighting is what misbehaves on diagonals).
    float dotProduct1 = Vector3.Dot(normalized, Vector3.left);
    float dotProduct2 = Vector3.Dot(normalized, Vector3.forward);
    float dotProduct3 = Vector3.Dot(normalized, Vector3.up);

    Vector3 dotVector = ((1.0f - Mathf.Abs(dotProduct1)) * Vector3.right) +
                        ((1.0f - Mathf.Abs(dotProduct2)) * Vector3.forward) +
                        ((1.0f - Mathf.Abs(dotProduct3)) * Vector3.up);

    A = Vector3.Cross(normalized, dotVector.normalized);
    B = Vector3.Cross(A, normalized);
}
What you want to do first is to find the two orthogonal basis vectors of the plane perpendicular to the line XY, passing through the point you choose.
You first need to find a vector which is perpendicular to XY. To do this:
Normalize the vector XY first
Dot XY with the X-axis
If the absolute value of this is close to 1 (for numerical stability, let's say 1 - |dot| < 0.1), then XY must be nearly parallel or anti-parallel to the X-axis, so we choose the Y-axis instead.
Otherwise we choose the X-axis.
For whichever chosen axis, cross it with XY to get one of the basis vectors; cross this with XY again to get the second vector.
Normalize them (not strictly necessary but very useful)
You now have two basis vectors to calculate your circle coordinates, call them A and B. Call the point you chose P.
Then any point on the circle can be parametrically calculated by
Q(r, t) = P + r * (A * cos(t) + B * sin(t))
where t is an angle (between 0 and 2π), and r is the circle's radius.
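Not part of the original answer, but here is roughly what those steps look like in code: a plain TypeScript sketch with hand-rolled vector helpers (in Unity you would use Vector3.Dot, Vector3.Cross and .normalized instead).

type Vec3 = { x: number; y: number; z: number };

const dot = (a: Vec3, b: Vec3) => a.x * b.x + a.y * b.y + a.z * b.z;
const cross = (a: Vec3, b: Vec3): Vec3 => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const normalize = (a: Vec3): Vec3 => {
  const len = Math.sqrt(dot(a, a));
  return { x: a.x / len, y: a.y / len, z: a.z / len };
};

// Points on a circle of radius r around P, in the plane perpendicular to the line XY.
function circleAroundLine(X: Vec3, Y: Vec3, P: Vec3, r: number, count: number): Vec3[] {
  const n = normalize({ x: Y.x - X.x, y: Y.y - X.y, z: Y.z - X.z }); // normalized XY
  // Pick a helper axis that is NOT nearly parallel to XY.
  const xAxis: Vec3 = { x: 1, y: 0, z: 0 };
  const yAxis: Vec3 = { x: 0, y: 1, z: 0 };
  const helper = Math.abs(dot(n, xAxis)) > 0.9 ? yAxis : xAxis;
  const A = normalize(cross(n, helper));  // first basis vector of the circle's plane
  const B = normalize(cross(A, n));       // second basis vector, perpendicular to both
  const pts: Vec3[] = [];
  for (let i = 0; i < count; i++) {
    const t = (2 * Math.PI * i) / count;
    // Q(r, t) = P + r * (A * cos(t) + B * sin(t))
    pts.push({
      x: P.x + r * (A.x * Math.cos(t) + B.x * Math.sin(t)),
      y: P.y + r * (A.y * Math.cos(t) + B.y * Math.sin(t)),
      z: P.z + r * (A.z * Math.cos(t) + B.z * Math.sin(t)),
    });
  }
  return pts;
}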
So I'm trying to write code that checks whether a ray intersects a flat circular disk, and I was hoping to get it checked out here. My disk is always centered on the negative z axis so its normal vector should be (0,0, -1).
The way I'm doing it is to first calculate the ray-plane intersection and then determine whether that intersection point is within the "scope" of the disk.
In my code I am getting some numbers that seem off, and I am not sure if the problem is in this method or possibly somewhere else. So if there is something wrong with this code, I would appreciate your feedback! =)
Here is my code:
float d = z_intercept; //This is where disk intersects z-axis. Can be + or -.
ray->d = Normalize(ray->d);
Point p(0, 0, d); //This is the center point of the disk
Point p0(0, 1, d);
Point p1(1, 0, d);
Vector n = Normalize(Cross(p0-p, p1-p));//Calculate normal
float diameter = DISK_DIAMETER; //Constant value
float t = (-d-Dot(p-ray->o, n))/Dot(ray->d, n); //Calculate the plane intersection
Point intersection = ray->o + t*ray->d;
return (Distance(p, intersection) <= diameter/2.0f); //See if within disk
//This is my code to calculate distance
float RealisticCamera::Distance(Point p, Point i)
{
return sqrt((p.x-i.x)*(p.x-i.x) + (p.y-i.y)*(p.y-i.y) + (p.z-i.z)*(p.z-i.z));
}
"My disk is always centered on the negative z axis so its normal vector should be (0,0, -1)."
This fact simplifies calculations.
Degenerate case: ray->d.z = 0. If ray->o.z = d, the ray lies in the disk's plane, so check it as a 2D problem; otherwise the ray is parallel to the plane and there is no intersection.
Common case: t = (d - ray->o.z) / ray->d.z
If t is positive, find x and y for this t and check that x^2 + y^2 <= disk_radius^2.
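As a sketch of the above (TypeScript, with hypothetical Ray/Vec3 shapes mirroring the question's o and d fields), the whole test boils down to:

type Vec3 = { x: number; y: number; z: number };
type Ray = { o: Vec3; d: Vec3 };   // origin and (normalized) direction

function hitsDisk(ray: Ray, d: number, radius: number): boolean {
  if (ray.d.z === 0) {
    // Degenerate case: the ray is parallel to the disk's plane z = d.
    // (If ray.o.z === d it lies in that plane and a 2D check is needed instead.)
    return false;
  }
  const t = (d - ray.o.z) / ray.d.z;        // where the ray reaches the plane z = d
  if (t < 0) return false;                  // the plane is behind the ray origin
  const x = ray.o.x + t * ray.d.x;
  const y = ray.o.y + t * ray.d.y;
  return x * x + y * y <= radius * radius;  // inside the disk?
}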
Calculation of t is wrong.
Points on the ray are:
ray->o + t * ray->d
in particular, coordinate z of a point on the ray is:
ray->o.z() + t * ray->d.z()
which must be equal to d. Solving for t gives
t = ( d - ray->o.z() ) / ray->d.z()
My problem:
How can I take two 3D points and lock them to a single axis? For instance, so that both their z coordinates are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the camera coordinate from the scene coordinate and normalize it, returning a vector that points to the camera. I then do a ray-plane intersection with a plane that is behind the camera point.
O + tD
where O (origin) is the camera position, D is the direction from the scene point to the camera, and t is the parameter value at which the ray intersects the plane.
If that doesn't make sense, here's a crude drawing: [drawing omitted]
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
The problem is not my camera rotation, I've eliminated that. The trouble is in translating the intersection point to barycentric (uv) coordinates.
The translation on the x-axis looks like this:
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation on the z-axis looks like this:
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I turn a ray-plane intersection point into barycentric coordinates on both the x and the z axis?
The usual formula for points (p) on a line, starting at (p0) with vector direction (v) is:
p = p0 + t*v
The criterion for a point (p) on a plane containing (p1) and with normal (n) is:
(p - p1).n = 0
So, plug&chug:
(p0 + t*v - p1).n = (p0-p1).n + t*(v.n) = 0
-> t = (p1-p0).n / v.n
-> p = p0 + ((p1-p0).n / v.n)*v
To check:
(p - p1).n = (p0-p1).n + ((p1-p0).n / v.n)*(v.n)
= (p0-p1).n + (p1-p0).n
= 0
If you want to fix the Z coordinate at a particular value, you need to choose a normal along the Z axis (which will define a plane parallel to the XY plane).
Then, you have:
n = (0,0,1)
-> p = p0 + ((p1.z-p0.z)/v.z) * v
-> x and y offsets from p0 = ((p1.z-p0.z)/v.z) * (v.x,v.y)
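As a small sketch of that z = constant case (TypeScript; the names are mine, not the answerer's):

type Vec3 = { x: number; y: number; z: number };

// Slide p0 along direction v until it reaches the plane z = zPlane.
function projectToZPlane(p0: Vec3, v: Vec3, zPlane: number): Vec3 | null {
  if (v.z === 0) return null;        // the line is parallel to the plane
  const t = (zPlane - p0.z) / v.z;   // t = (p1.z - p0.z) / v.z
  return { x: p0.x + t * v.x, y: p0.y + t * v.y, z: zPlane };
}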
Finally, if you're trying to build a virtual "camera" for 3D computer graphics, the standard way to do this kind of thing is homogeneous coordinates. Ultimately, working with homogeneous coordinates is simpler (and usually faster) than the kind of ad hoc 3D vector algebra I have written above.
I have a renderer using DirectX and OpenGL, and a 3D scene. The viewport and the window have the same dimensions.
How do I implement picking given mouse coordinates x and y in a platform independent way?
If you can, do the picking on the CPU by calculating a ray from the eye through the mouse pointer and intersecting it with your models.
If this isn't an option, I would go with some type of ID rendering. Assign each object you want to pick a unique color, render the objects with these colors, and finally read back the color from the framebuffer under the mouse pointer.
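For instance, the ID-to-color mapping can be as simple as packing the ID into the RGB channels (a TypeScript sketch, assuming an 8-bit RGB framebuffer and fewer than 2^24 pickable objects):

// Encode an object ID as an RGB color for the picking pass...
function idToColor(id: number): [number, number, number] {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

// ...and decode the color read back from the framebuffer under the mouse.
function colorToId(r: number, g: number, b: number): number {
  return (r << 16) | (g << 8) | b;
}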
EDIT: If the question is how to construct the ray from the mouse coordinates, you need the following: the projection matrix P and the camera transform C. If the coordinates of the mouse pointer are (x, y) and the size of the viewport is (width, height), one position in clip space along the ray is:
mouse_clip = [
float(x) * 2 / float(width) - 1,
1 - float(y) * 2 / float(height),
0,
1]
(Notice that I flipped the y-axis, since the origin of the mouse coordinates is usually in the upper left corner.)
The following is also true:
mouse_clip = P * C * mouse_worldspace
Which gives:
mouse_worldspace = inverse(C) * inverse(P) * mouse_clip
We now have:
p = C.position(); //origin of camera in worldspace
n = normalize(mouse_worldspace - p); //unit vector from p through mouse pos in worldspace
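Putting those formulas together, a TypeScript sketch (the inverse matrices are assumed to come from your math library as column-major 4x4 arrays, OpenGL-style; all names are illustrative, and the homogeneous divide by w is included because the perspective projection requires it):

type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, column-major

function mulMat4Vec4(m: Mat4, v: Vec4): Vec4 {
  return [
    m[0] * v[0] + m[4] * v[1] + m[8]  * v[2] + m[12] * v[3],
    m[1] * v[0] + m[5] * v[1] + m[9]  * v[2] + m[13] * v[3],
    m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14] * v[3],
    m[3] * v[0] + m[7] * v[1] + m[11] * v[2] + m[15] * v[3],
  ];
}

function mouseRayDirection(
  x: number, y: number, width: number, height: number,
  invProjection: Mat4, invView: Mat4,           // inverse(P), inverse(C)
  cameraPos: [number, number, number],          // p = C.position()
): [number, number, number] {
  const clip: Vec4 = [(2 * x) / width - 1, 1 - (2 * y) / height, 0, 1];  // mouse_clip
  const eye = mulMat4Vec4(invProjection, clip);        // inverse(P) * mouse_clip
  const world = mulMat4Vec4(invView, eye);             // inverse(C) * ...
  // Homogeneous divide to get a 3D point on the ray in worldspace.
  const wx = world[0] / world[3], wy = world[1] / world[3], wz = world[2] / world[3];
  // n = normalize(mouse_worldspace - p)
  const dx = wx - cameraPos[0], dy = wy - cameraPos[1], dz = wz - cameraPos[2];
  const len = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return [dx / len, dy / len, dz / len];
}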
Here's the viewing frustum: [diagram omitted]
First you need to determine where on the nearplane the mouse click happened:
rescale the window coordinates (0..640,0..480) to [-1,1], with (-1,-1) at the bottom-left corner and (1,1) at the top-right.
'undo' the projection by multiplying the scaled coordinates by what I call the 'unview' matrix: unview = (P * M).inverse() = M.inverse() * P.inverse(), where M is the ModelView matrix and P is the projection matrix.
Then determine where the camera is in worldspace, and draw a ray starting at the camera and passing through the point you found on the nearplane.
The camera is at M.inverse().col(4), i.e. the final column of the inverse ModelView matrix.
Final pseudocode:
normalised_x = 2 * mouse_x / win_width - 1
normalised_y = 1 - 2 * mouse_y / win_height
// note the y pos is inverted, so +y is at the top of the screen
unviewMat = (projectionMat * modelViewMat).inverse()
near_point = unviewMat * Vec(normalised_x, normalised_y, 0, 1)
camera_pos = ray_origin = modelViewMat.inverse().col(4)
ray_dir = near_point - camera_pos
Well, it's pretty simple; the theory behind this is always the same.
1) Unproject your 2D coordinate onto 3D space twice (each API has its own function for this, but you can implement your own if you want): once at the minimum Z, once at the maximum Z.
2) With these two points, calculate the vector that goes from the min-Z point to the max-Z point.
3) With that vector and one of the points, build the ray that goes from min Z to max Z.
4) Now that you have a ray, you can do a ray-triangle/ray-plane/ray-whatever intersection and get your result...
I have little DirectX experience, but I'm sure it's similar to OpenGL. What you want is the gluUnProject call.
Assuming you have a valid Z buffer you can query the contents of the Z buffer at a mouse position with:
// obtain the viewport, modelview matrix and projection matrix
// you may keep the viewport and projection matrices throughout the program if you don't change them
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
// obtain the Z position (not world coordinates but in range 0 - 1)
GLfloat z_cursor;
glReadPixels(x_cursor, y_cursor, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
// obtain the world coordinates
GLdouble x, y, z;
gluUnProject(x_cursor, y_cursor, z_cursor, modelview, projection, viewport, &x, &y, &z);
If you don't want to use GLU, you can also implement gluUnProject yourself; its functionality is relatively simple and is described at opengl.org.
OK, this topic is old, but it was the best I found on the subject and it helped me a bit, so I'll post here for those who are following ;-)
This is the way I got it to work without having to compute the inverse of the projection matrix:
void Application::leftButtonPress(u32 x, u32 y) {
    GL::Viewport vp = GL::getViewport(); // just a call to glGet GL_VIEWPORT
    vec3f p = vec3f::from(
        ((float)(vp.width - x) / (float)vp.width),
        ((float)y / (float)vp.height),
        1.);
    // alternatively vec3f p = vec3f::from(
    //     ((float)x / (float)vp.width),
    //     ((float)(vp.height - y) / (float)vp.height),
    //     1.);
    p *= vec3f::from(APP_FRUSTUM_WIDTH, APP_FRUSTUM_HEIGHT, 1.);
    p += vec3f::from(APP_FRUSTUM_LEFT, APP_FRUSTUM_BOTTOM, 0.);
    // now p elements are in (-1, 1)
    vec3f near = p * vec3f::from(APP_FRUSTUM_NEAR);
    vec3f far = p * vec3f::from(APP_FRUSTUM_FAR);
    // ray in world coordinates
    Ray ray = { _camera->getPos(), -(_camera->getBasis() * (far - near).normalize()) };
    _ray->set(ray.origin, ray.dir, 10000.); // this is a debugging vertex array to see the ray on screen
    Node* node = _scene->collide(ray, Transform());
    cout << "node is : " << node << endl;
}
This assumes a perspective projection, but the question never arises for the orthographic one in the first place.
I've got the same situation with ordinary ray picking, but something is wrong. I've performed the unproject operation the proper way, but it just doesn't work. I think I've made some mistake but can't figure out where. My matrix multiplication, inverse, and vector-by-matrix multiplication all seem to work fine; I've tested them.
In my code I'm reacting to WM_LBUTTONDOWN, so lParam returns the [Y][X] coordinates as two words in a dword. I extract them and then convert them to normalized space; I've checked that this part also works fine. When I click the lower-left corner I get values close to -1, -1, and good values for all three other corners. I'm then using the line_points.vtx array for debugging, and it's not even close to reality.
unsigned int x_coord=lParam&0x0000ffff; //X RAW COORD
unsigned int y_coord=client_area.bottom-(lParam>>16); //Y RAW COORD
double xn=((double)x_coord/client_area.right)*2-1; //X [-1 +1]
double yn=1-((double)y_coord/client_area.bottom)*2;//Y [-1 +1]
_declspec(align(16))gl_vec4 pt_eye(xn,yn,0.0,1.0);
gl_mat4 view_matrix_inversed;
gl_mat4 projection_matrix_inversed;
cam.matrixProjection.inverse(&projection_matrix_inversed);
cam.matrixView.inverse(&view_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&projection_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&view_matrix_inversed);
line_points.vtx[line_points.count*4]=pt_eye.x-cam.pos.x;
line_points.vtx[line_points.count*4+1]=pt_eye.y-cam.pos.y;
line_points.vtx[line_points.count*4+2]=pt_eye.z-cam.pos.z;
line_points.vtx[line_points.count*4+3]=1.0;