I'm currently working on a game project and need to render a point in front of the current player's vision; the game is written in a custom C++ engine. I have the current position (x,y,z) and the current rotation (pitch,yaw,roll). I need to extend the point forward along the known angle at a set distance.
edit:
What I Used As A Solution (It's slightly off, but that's OK for me)
Vec3 LocalPos = {0,0,0};
Vec3 CurrentLocalAngle = {0,0,0}; // pitch (x), yaw (y), roll (z) in degrees
float len = 0.1f; // how far in front of the player to place the point
float pitch = CurrentLocalAngle.x * (M_PI / 180);
float yaw = CurrentLocalAngle.y * (M_PI / 180);
float sp = sinf(pitch);
float cp = cosf(pitch);
float sy = sinf(yaw);
float cy = cosf(yaw);
Vec3 dir = { cp * cy, cp * sy, -sp }; // unit forward vector for this angle convention
LocalPos = { LocalPos.x + dir.x * len, LocalPos.y + dir.y * len, LocalPos.z + dir.z * len };
You can get the player's forward vector from column 3 of the transform matrix if it is column-based; multiply that normalized vector by the distance you want, then add the result to the player position, and you will get the point you need.
Convert the angle to a directional vector or just get the "forward vector" from the player if it's available in the engine you're using (it should be the same thing).
Directional vectors are normalized by nature (they have length = 1), so you can just multiply one by the desired distance to get the desired offset. Multiply this vector by the distance you want the point to be from the reference point (the player's camera, I presume), then add one to the other to get the point in the world where this point belongs.
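To make that concrete, here's a minimal C++ sketch of both routes (the Vec3 type and the X-forward, Z-up angle convention follow the snippet above; the helper names are made up):
#include <cmath>
struct Vec3 { float x, y, z; };
// Route 1: build the forward vector from pitch/yaw in degrees.
Vec3 forwardFromAngles(float pitchDeg, float yawDeg) {
    const float d2r = 3.14159265f / 180.0f;
    float p = pitchDeg * d2r, yw = yawDeg * d2r;
    return { cosf(p) * cosf(yw), cosf(p) * sinf(yw), -sinf(p) };
}
// Route 2 is engine-specific: read the forward axis out of the transform
// matrix (e.g. column 3 of a column-based matrix) and normalize it.
// Either way, scale the unit direction and add it to the position:
Vec3 pointInFront(Vec3 pos, Vec3 fwd, float distance) {
    return { pos.x + fwd.x * distance,
             pos.y + fwd.y * distance,
             pos.z + fwd.z * distance };
}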
I need to calculate the angles to throw the ball in that direction, given the speed and the point where it should land after being thrown.
The horizontal angle is easy (we know both the start and end points). How do I calculate the vertical angle of projection? There is gravity acting on the object.
The time of travel should equal the usual bowling time (the time between the ball's release and its landing), as per the video.
Is there a way to do this directly in Unity3D?
Watch the video for 8 seconds for a clear understanding of this question.
According to the Wikipedia page Trajectory of a projectile, the "Angle of reach" (The angle you want to know) is calculated as follows:
θ = 1/2 * arcsin(gd/v²)
In this formula, g is the gravitational acceleration (9.81 m/s²), d is the distance you want the projectile to travel, and v is the velocity at which the object is thrown.
Code to calculate this could look something like this:
float ThrowAngle(Vector3 destination, float velocity)
{
const float g = 9.81f;
float distance = Vector3.Distance(transform.position, destination);
//assuming you want degrees, otherwise just drop the Rad2Deg.
return Mathf.Rad2Deg * (0.5f * Mathf.Asin((g*distance)/Mathf.Pow(velocity, 2f)));
}
This will give you the angle, assuming no air resistance etc. exists in your game.
If your destination and your "throwing point" are not at the same height, you may want to set both to y=0 first; otherwise, errors may occur.
EDIT:
Considering that your launch point is higher up than the destination, this formula from the same page should work:
θ = arctan((v² ± √(v⁴ − g(gx² + 2yv²))) / (gx))
Here, x is the range, or distance, and y is the altitude (relative to the launch point).
Code:
float ThrowAngle(Vector3 start, Vector3 destination, float v)
{
const float g = 9.81f;
float xzd = Mathf.Sqrt(Mathf.Pow(destination.x - start.x, 2) + Mathf.Pow(destination.z - start.z, 2));
float yd = destination.y - start.y;
//assuming you want degrees, otherwise just drop the Rad2Deg. Split into two lines for better readability.
float sqrt = Mathf.Sqrt(Mathf.Pow(v,4) - g * (g*Mathf.Pow(xzd,2) + 2*yd*Mathf.Pow(v,2)));
//you could also implement a solution which uses both values in some way, but I left that out for simplicity.
return Mathf.Rad2Deg * Mathf.Atan((Mathf.Pow(v, 2) + sqrt) / (g*xzd));
}
Time for a little bit of math at the end of the day...
I need to project 4 points of the window size:
<0,0> <1024,768>
Into world-space coordinates so they form a quadrilateral shape that will later be used for terrain culling - without gluUnProject.
For testing only, I use mouse coordinates and try to unproject them into world coordinates.
RESOLVED
Here's how to do it exactly, step by step.
Obtain your mouse coordinates within the client area
Get your Projection matrix and View matrix (if no Model matrix is required).
Multiply Projection * View
Invert the result of the multiplication
Construct a vector4 consisting of
x = mouseposition.x within a range of window x
transform to values between -1 and 1
y = mouseposition.y within a range of window y
transform to values between -1 and 1
remember to invert mouseposition.y if needed
z = the depth value (this can be obtained with glReadPixels)
you can manually go from -1 to 1 ( zNear, zFar )
w = 1.0
Multiply the vector by the inverse matrix created before
Divide the result vector by its w component after the matrix multiplication (perspective division)
POINT mousePos;
GetCursorPos(&mousePos);
ScreenToClient( this->GetWindowHWND(), &mousePos );
CMatrix4x4 matProjection = m_pCamera->getViewMatrix() * m_pCamera->getProjectionMatrix();
CMatrix4x4 matInverse = matProjection.inverse();
float in[4];
float winZ = 1.0f; // depth in [0,1]; 1.0f = far plane, or read it with glReadPixels
in[0] = (2.0f*((float)mousePos.x / this->GetResolution().x)) - 1.0f; // x to [-1,1]
in[1] = 1.0f - (2.0f*((float)mousePos.y / this->GetResolution().y)); // y to [-1,1], inverted
in[2] = 2.0f*winZ - 1.0f; // depth to [-1,1]
in[3] = 1.0f;
CVector4 vIn = CVector4(in[0], in[1], in[2], in[3]);
CVector4 pos = vIn * matInverse;
pos.w = 1.0f / pos.w; // perspective division
pos.x *= pos.w;
pos.y *= pos.w;
pos.z *= pos.w;
sprintf(strTitle,"%f %f %f / %f,%f,%f ",m_pCamera->m_vPosition.x,m_pCamera->m_vPosition.y,m_pCamera->m_vPosition.z,pos.x,pos.y,pos.z);
SetWindowText(this->GetWindowHWND(),strTitle);
I had to make some adjustments to the answers provided here, but here's the code I ended up with (note I'm using GLM, which could affect multiplication order). nearResult is the projected point on the near plane and farResult is the projected point on the far plane. I want to perform a ray cast to see what my mouse is hovering over, so I convert them to a direction vector which will then originate from my camera's position.
vec3 getRayFromScreenSpace(const vec2 & pos)
{
mat4 invMat = inverse(m_glData.getPerspective() * m_glData.getView());
vec4 near = vec4((pos.x - Constants::m_halfScreenWidth) / Constants::m_halfScreenWidth, -1*(pos.y - Constants::m_halfScreenHeight) / Constants::m_halfScreenHeight, -1, 1.0);
vec4 far = vec4((pos.x - Constants::m_halfScreenWidth) / Constants::m_halfScreenWidth, -1*(pos.y - Constants::m_halfScreenHeight) / Constants::m_halfScreenHeight, 1, 1.0);
vec4 nearResult = invMat*near;
vec4 farResult = invMat*far;
nearResult /= nearResult.w;
farResult /= farResult.w;
vec3 dir = vec3(farResult - nearResult );
return normalize(dir);
}
Multiply all your matrices, then invert the result. Points after projection are always in [-1, 1]. So the four corner screen points are (-1,-1), (-1,1), (1,-1), (1,1). But you still need to choose the z value. If you are in OpenGL, z is between -1 and 1. For DirectX, the range is 0 to 1. Finally, take your points and transform them with the matrix.
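As a concrete sketch of that recipe with GLM (the matrix names are assumptions; note GLM multiplies column vectors from the left):
#include <glm/glm.hpp>
// Unproject one NDC corner at a chosen depth (OpenGL: z in [-1, 1]).
glm::vec3 unprojectCorner(const glm::mat4& proj, const glm::mat4& view,
                          float ndcX, float ndcY, float ndcZ)
{
    glm::mat4 inv = glm::inverse(proj * view);
    glm::vec4 p = inv * glm::vec4(ndcX, ndcY, ndcZ, 1.0f);
    return glm::vec3(p) / p.w; // don't forget the perspective division
}
// The four corners on the far plane:
//   unprojectCorner(proj, view, -1, -1, 1), unprojectCorner(proj, view, 1, -1, 1),
//   unprojectCorner(proj, view, 1, 1, 1),  unprojectCorner(proj, view, -1, 1, 1)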
If you have access to the glu libraries, use gluUnProject(winX, winY, winZ, model, projection, viewport, &objX, &objY, &objZ);
winX and winY will be the corners of your screen in pixels. winZ is a number in [0,1] which specifies where between zNear and zFar (the clipping planes) the point should fall. objX, objY and objZ will hold the results. The middle variables are the relevant matrices; they can be queried if needed.
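For the original four-corner problem, the call might look like this (a sketch; the matrix queries are the same as in the depth-reading example further down):
// Unproject the four window corners onto the far plane (winZ = 1.0)
// to get the world-space quadrilateral.
GLint viewport[4];
GLdouble model[16], projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
GLdouble corners[4][3];
const double cx[4] = {0, 1024, 1024, 0}, cy[4] = {0, 0, 768, 768};
for (int i = 0; i < 4; ++i)
    gluUnProject(cx[i], cy[i], 1.0, model, projection, viewport,
                 &corners[i][0], &corners[i][1], &corners[i][2]);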
I have a 3D mesh and I would like to draw each face as a 2D shape.
What I have in mind is this:
for each face
1. access the face normal
2. get a rotation matrix from the normal vector
3. multiply each vertex by the rotation matrix to get the vertices in a '2d-like' plane
4. get 2 coordinates from the transformed vertices
I don't know if this is the best way to do this, so any suggestion is welcome.
At the moment I'm trying to get a rotation matrix from the normal vector;
how would I do this?
UPDATE:
Here is a visual explanation of what I need:
At the moment I have quads, but there's no problem
converting them into triangles.
I want to rotate the vertices of a face, so that
one of the dimensions gets flattened.
I also need to store the original 3d rotation of the face.
I imagine that would be the inverse of the rotation given by the face
normal.
I think I'm a bit lost in space :)
Here's a basic prototype I did using Processing:
void setup(){
size(400,400,P3D);
background(255);
stroke(0,0,120);
smooth();
fill(0,120,0);
PVector x = new PVector(1,0,0);
PVector y = new PVector(0,1,0);
PVector z = new PVector(0,0,1);
PVector n = new PVector(0.378521084785,0.925412774086,0.0180059205741);//normal
PVector p0 = new PVector(0.372828125954,-0.178844243288,1.35241031647);
PVector p1 = new PVector(-1.25476706028,0.505195975304,0.412718296051);
PVector p2 = new PVector(-0.372828245163,0.178844287992,-1.35241031647);
PVector p3 = new PVector(1.2547672987,-0.505196034908,-0.412717700005);
PVector[] face = {p0,p1,p2,p3};
PVector[] face2d = new PVector[4];
PVector nr = PVector.add(n,new PVector());//clone normal
float rx = degrees(acos(n.dot(x)));//angle between normal and x axis
float ry = degrees(acos(n.dot(y)));//angle between normal and y axis
float rz = degrees(acos(n.dot(z)));//angle between normal and z axis
PMatrix3D r = new PMatrix3D();
//is this ok, or should I drop the builtin function, and add
//the rotations manually
r.rotateX(rx);
r.rotateY(ry);
r.rotateZ(rz);
print("original: ");println(face);
for(int i = 0 ; i < 4; i++){
PVector rv = new PVector();
PVector rn = new PVector();
r.mult(face[i],rv);
r.mult(nr,rn);
face2d[i] = PVector.add(face[i],rv);
}
print("rotated: ");println(face2d);
//draw
float scale = 100.0;
translate(width * .5,height * .5);//move to centre, Processing has 0,0 = Top,Left
beginShape(QUADS);
for(int i = 0 ; i < 4; i++){
vertex(face2d[i].x * scale,face2d[i].y * scale,face2d[i].z * scale);
}
endShape();
line(0,0,0,nr.x*scale,nr.y*scale,nr.z*scale);
//what do I do with this ?
float c = cos(0), s = sin(0);
float x2 = n.x*n.x,y2 = n.y*n.y,z2 = n.z*n.z;
PMatrix3D m = new PMatrix3D(x2+(1-x2)*c, n.x*n.y*(1-c)-n.z*s, n.x*n.z*(1-c)+n.y*s, 0,
n.x*n.y*(1-c)+n.z*s,y2+(1-y2)*c,n.y*n.z*(1-c)-n.x*s,0,
n.x*n.y*(1-c)-n.y*s,n.x*n.z*(1-c)+n.x*s,z2-(1-z2)*c,0,
0,0,0,1);
}
Update
Sorry if I'm getting annoying, but I don't seem to get it.
Here's a bit of python using Blender's API:
import Blender
from Blender import *
import math
from math import sin,cos,radians,degrees
def getRotMatrix(n):
c = cos(0)
s = sin(0)
x2 = n.x*n.x
y2 = n.y*n.y
z2 = n.z*n.z
l1 = x2+(1-x2)*c, n.x*n.y*(1-c)+n.z*s, n.x*n.y*(1-c)-n.y*s
l2 = n.x*n.y*(1-c)-n.z*s,y2+(1-y2)*c,n.x*n.z*(1-c)+n.x*s
l3 = n.x*n.z*(1-c)+n.y*s,n.y*n.z*(1-c)-n.x*s,z2-(1-z2)*c
m = Mathutils.Matrix(l1,l2,l3)
return m
scn = Scene.GetCurrent()
ob = scn.objects.active.getData(mesh=True)#access mesh
out = ob.name+'\n'
#face0
f = ob.faces[0]
n = f.v[0].no
out += 'face: ' + str(f)+'\n'
out += 'normal: ' + str(n)+'\n'
m = getRotMatrix(n)
m.invert()
rvs = []
for v in range(0,len(f.v)):
out += 'original vertex'+str(v)+': ' + str(f.v[v].co) + '\n'
rvs.append(m*f.v[v].co)
out += '\n'
for v in range(0,len(rvs)):
out += 'rotated vertex'+str(v)+': ' + str(rvs[v]) + '\n'
f = open('out.txt','w')
f.write(out)
f.close()
All I do is get the current object, access the first face, get the normal, get the vertices, calculate the rotation matrix, invert it, then multiply it by each vertex.
Finally I write a simple output.
Here's the output for a default plane for which I rotated all the vertices manually by 30 degrees:
Plane.008
face: [MFace (0 3 2 1) 0]
normal: [0.000000, -0.499985, 0.866024](vector)
original vertex0: [1.000000, 0.866025, 0.500000](vector)
original vertex1: [-1.000000, 0.866026, 0.500000](vector)
original vertex2: [-1.000000, -0.866025, -0.500000](vector)
original vertex3: [1.000000, -0.866025, -0.500000](vector)
rotated vertex0: [1.000000, 0.866025, 1.000011](vector)
rotated vertex1: [-1.000000, 0.866026, 1.000012](vector)
rotated vertex2: [-1.000000, -0.866025, -1.000012](vector)
rotated vertex3: [1.000000, -0.866025, -1.000012](vector)
Here's the first face of the famous Suzanne mesh:
Suzanne.001
face: [MFace (46 0 2 44) 0]
normal: [0.987976, -0.010102, 0.154088](vector)
original vertex0: [0.468750, 0.242188, 0.757813](vector)
original vertex1: [0.437500, 0.164063, 0.765625](vector)
original vertex2: [0.500000, 0.093750, 0.687500](vector)
original vertex3: [0.562500, 0.242188, 0.671875](vector)
rotated vertex0: [0.468750, 0.242188, -0.795592](vector)
rotated vertex1: [0.437500, 0.164063, -0.803794](vector)
rotated vertex2: [0.500000, 0.093750, -0.721774](vector)
rotated vertex3: [0.562500, 0.242188, -0.705370](vector)
The vertices from the Plane.008 mesh are altered; the ones from Suzanne.001's mesh
aren't. Shouldn't they be? Should I expect to get zeroes on one axis?
Once I've got the rotation matrix from the normal vector, what is the rotation on x, y, z?
Note: 1. Blender's Matrix supports the * operator. 2. In Blender's coordinate system, Z points up. It looks like a right-handed system, rotated 90 degrees on X.
Thanks
That looks reasonable to me. Here's how to get a rotation matrix from a normal vector: in the axis-angle form, the normal is the axis vector and the angle is 0. You probably want the inverse rotation.
Is your mesh triangulated? I'm assuming it is. If so, you can do this without rotation matrices. Let the points of the face be A, B, C. Take any two vertices of the face, say A and B. Define the x axis along vector AB. A is at (0,0), B is at (|AB|, 0). C can be determined from trigonometry using the angle between AC and AB (which you get by using the dot product) and the length |AC|.
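A small C++ sketch of that construction (the Vec3/Vec2 types and helpers here are assumed, not from any particular library):
#include <algorithm>
#include <cmath>
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(Vec3 v) { return std::sqrt(dot(v, v)); }
// Flatten triangle A,B,C: A at the origin, B on the x axis,
// C placed via the angle between AB and AC.
void flattenTriangle(Vec3 A, Vec3 B, Vec3 C, Vec2& a2, Vec2& b2, Vec2& c2)
{
    Vec3 ab = sub(B, A), ac = sub(C, A);
    float lab = len(ab), lac = len(ac);
    float cosT = dot(ab, ac) / (lab * lac);                   // cos of angle BAC
    float sinT = std::sqrt(std::max(0.0f, 1.0f - cosT*cosT)); // sin is non-negative
    a2 = { 0.0f, 0.0f };
    b2 = { lab, 0.0f };
    c2 = { lac * cosT, lac * sinT };
}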
You created the m matrix correctly. This is the rotation that corresponds to your normal vector. You can use the inverse of this matrix to "unrotate" your points. The normal of face2d will be x, i.e. point along the x-axis. So extract your 2d coordinates accordingly. (This assumes your quad is approximately planar.)
I don't know the library you are using (Processing), so I'm just assuming there are methods for m.invert() and an operator for applying a rotation matrix to a point. They may of course be called something else. Luckily the inverse of a pure rotation matrix is its transpose, and multiplying a matrix and a vector are straightforward to do manually if you need to.
void setup(){
size(400,400,P3D);
background(255);
stroke(0,0,120);
smooth();
fill(0,120,0);
PVector x = new PVector(1,0,0);
PVector y = new PVector(0,1,0);
PVector z = new PVector(0,0,1);
PVector n = new PVector(0.378521084785,0.925412774086,0.0180059205741);//normal
PVector p0 = new PVector(0.372828125954,-0.178844243288,1.35241031647);
PVector p1 = new PVector(-1.25476706028,0.505195975304,0.412718296051);
PVector p2 = new PVector(-0.372828245163,0.178844287992,-1.35241031647);
PVector p3 = new PVector(1.2547672987,-0.505196034908,-0.412717700005);
PVector[] face = {p0,p1,p2,p3};
PVector[] face2d = new PVector[4];
//what do I do with this ?
float c = cos(0), s = sin(0);
float x2 = n.x*n.x,y2 = n.y*n.y,z2 = n.z*n.z;
PMatrix3D m_inverse =
new PMatrix3D(x2+(1-x2)*c, n.x*n.y*(1-c)+n.z*s, n.x*n.y*(1-c)-n.y*s, 0,
n.x*n.y*(1-c)-n.z*s,y2+(1-y2)*c,n.x*n.z*(1-c)+n.x*s, 0,
n.x*n.z*(1-c)+n.y*s,n.y*n.z*(1-c)-n.x*s,z2-(1-z2)*c, 0,
0,0,0,1);
face2d[0] = m_inverse * p0; // Assuming there's an appropriate operator*().
face2d[1] = m_inverse * p1;
face2d[2] = m_inverse * p2;
face2d[3] = m_inverse * p3;
// print & draw as you did before...
}
For a face v0-v1-v3-v2, the vectors v3-v0 and v3-v2 together with the face normal already form a rotation matrix that would transform the 2d face into the 3d face.
A matrix represents a coordinate system. Each row (or column, depending on notation) corresponds to one axis of the new coordinate system. A 3d rotation/translation matrix can be represented as:
vx.x vx.y vx.z 0
vy.x vy.y vy.z 0
vz.x vz.y vz.z 0
vp.x vp.y vp.z 1
where vx is an x axis of a coordinate system, vy - y axis, vz - z axis, and vp - origin of new system.
Assume that v3-v0 is the y axis (2nd row), v3-v2 the x axis (1st row), and the normal the z axis (3rd row). Build a matrix from them, then invert it. You'll get a matrix that will rotate the 3d face into a 2d face, as sketched below.
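Sketched in C++ (assumed Vec3 helpers; this uses the fact that the inverse of a pure rotation is its transpose, so "multiplying by the inverse" is just three dot products):
#include <cmath>
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return { v.x/l, v.y/l, v.z/l }; }
// Express vertex v in face-local coordinates (assumes the two edges are
// roughly perpendicular, as in the setup above); out[2] is ~0 for a planar face.
void faceLocal(Vec3 v0, Vec3 v2, Vec3 v3, Vec3 normal, Vec3 v, float out[3])
{
    Vec3 xAxis = normalize(sub(v2, v3)); // 1st row
    Vec3 yAxis = normalize(sub(v0, v3)); // 2nd row
    Vec3 zAxis = normalize(normal);      // 3rd row
    Vec3 d = sub(v, v3);                 // relative to the new origin vp = v3
    out[0] = dot(d, xAxis);
    out[1] = dot(d, yAxis);
    out[2] = dot(d, zAxis);
}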
I have a 3D mesh and I would like to draw each face as a 2D shape.
I suspect that UV unwrapping algorithms are closer to what you want to achieve than trying to get rotation matrix from 3d face.
That's very easy to achieve: (Note: By "face" I mean "triangle")
Create a view matrix that represents a camera looking at a face.
Determine the center of the face with bi-linear interpolation.
Determine the normal of the face.
Position the camera some units in opposite normal direction.
Let the camera look at the center of the face.
Set the camera's up vector to point from the center toward any vertex of the face.
Set the aspect ratio to 1.
Compute the view matrix using this data.
Create a orthogonal projection matrix.
Set the width and height of the view frustum large enough to contain the whole face (e.g. the length of the longest side of the face).
Compute the projection matrix.
For every vertex v of the face, multiply it by both matrices: v * view * projection.
The result is a projection of your 3d faces into 2d space, as if you were looking at them exactly orthogonally, without any perspective distortion. The final coordinates will be in normalized screen coordinates, where (-1,-1) is the bottom-left corner, (0,0) is the center and (1,1) is the top-right corner.
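A rough GLM sketch of those steps (the back-off distance, frustum size, and near/far values are simplifying assumptions):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Orthographically project one triangle into normalized 2d coordinates.
void projectFace(glm::vec3 a, glm::vec3 b, glm::vec3 c, glm::vec2 out[3])
{
    glm::vec3 center = (a + b + c) / 3.0f;
    glm::vec3 normal = glm::normalize(glm::cross(b - a, c - a));
    glm::vec3 up     = glm::normalize(a - center); // toward one vertex
    glm::vec3 eye    = center + normal;            // one unit back along the normal
    float half = glm::max(glm::length(b - a),
                 glm::max(glm::length(c - b), glm::length(a - c)));
    glm::mat4 view = glm::lookAt(eye, center, up);
    glm::mat4 proj = glm::ortho(-half, half, -half, half, 0.1f, 10.0f);
    const glm::vec3 v[3] = { a, b, c };
    for (int i = 0; i < 3; ++i)
        out[i] = glm::vec2(proj * view * glm::vec4(v[i], 1.0f)); // GLM is column-vector,
                                                                 // hence proj * view * v
}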
I have a renderer using directx and openGL, and a 3d scene. The viewport and the window are of the same dimensions.
How do I implement picking given mouse coordinates x and y in a platform independent way?
If you can, do the picking on the CPU by calculating a ray from the eye through the mouse pointer and intersect it with your models.
If this isn't an option I would go with some type of ID rendering. Assign each object you want to pick a unique color, render the objects with these colors and finally read out the color from the framebuffer under the mouse pointer.
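The readback end of the ID-rendering approach might look like this in OpenGL (a sketch; the flat-color render pass itself, and the mouseX/mouseY inputs, are assumed):
// Read the pixel under the mouse after rendering each pickable object in a
// unique flat color (lighting, texturing and MSAA off).
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
unsigned char pixel[3];
// flip y: OpenGL's framebuffer origin is the bottom-left corner
glReadPixels(mouseX, viewport[3] - mouseY - 1, 1, 1,
             GL_RGB, GL_UNSIGNED_BYTE, pixel);
// decode, assuming the id was encoded as id = r | g<<8 | b<<16
unsigned int id = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);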
EDIT: If the question is how to construct the ray from the mouse coordinates, you need the following: a projection matrix P and the camera transform C. If the coordinates of the mouse pointer are (x, y) and the size of the viewport is (width, height), one position in clip space along the ray is:
mouse_clip = [
float(x) * 2 / float(width) - 1,
1 - float(y) * 2 / float(height),
0,
1]
(Notice that I flipped the y-axis, since the origin of the mouse coordinates is often in the upper left corner.)
The following is also true:
mouse_clip = P * C * mouse_worldspace
Which gives:
mouse_worldspace = inverse(C) * inverse(P) * mouse_clip
We now have:
p = C.position(); //origin of camera in worldspace
n = normalize(mouse_worldspace - p); //unit vector from p through mouse pos in worldspace
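In GLM terms, and being careful to do the perspective division that the inverse projection makes necessary, this might look like (a sketch; names follow the formulas above):
#include <glm/glm.hpp>
// Ray direction through the mouse position; P = projection, C = camera (view) matrix.
glm::vec3 mouseRay(const glm::mat4& P, const glm::mat4& C,
                   float x, float y, float width, float height,
                   const glm::vec3& p /* camera position in worldspace */)
{
    glm::vec4 mouse_clip(x * 2.0f / width - 1.0f,
                         1.0f - y * 2.0f / height,
                         0.0f, 1.0f);
    glm::vec4 w = glm::inverse(C) * glm::inverse(P) * mouse_clip;
    glm::vec3 mouse_worldspace = glm::vec3(w) / w.w; // perspective division
    return glm::normalize(mouse_worldspace - p);
}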
Here's the viewing frustum:
First you need to determine where on the nearplane the mouse click happened:
rescale the window coordinates (0..640,0..480) to [-1,1], with (-1,-1) at the bottom-left corner and (1,1) at the top-right.
'undo' the projection by multiplying the scaled coordinates by what I call the 'unview' matrix: unview = (P * M).inverse() = M.inverse() * P.inverse(), where M is the ModelView matrix and P is the projection matrix.
Then determine where the camera is in worldspace, and draw a ray starting at the camera and passing through the point you found on the nearplane.
The camera is at M.inverse().col(4), i.e. the final column of the inverse ModelView matrix.
Final pseudocode:
normalised_x = 2 * mouse_x / win_width - 1
normalised_y = 1 - 2 * mouse_y / win_height
// note the y pos is inverted, so +y is at the top of the screen
unviewMat = (projectionMat * modelViewMat).inverse()
near_point = unviewMat * Vec(normalised_x, normalised_y, 0, 1)
camera_pos = ray_origin = modelViewMat.inverse().col(4)
ray_dir = near_point - camera_pos
Well, pretty simple - the theory behind this is always the same:
1) Unproject your 2D coordinate onto the 3D space twice (each API has its own function, but you can implement your own if you want): once at min Z, once at max Z.
2) With these two values, calculate the vector that goes from min Z and points to max Z.
3) With the vector and a point, calculate the ray that goes from min Z to max Z.
4) Now you have a ray; with it you can do a ray-triangle/ray-plane/ray-something intersection and get your result (see the sketch below)...
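For step 4, here's a minimal ray-plane intersection sketch (GLM again; the plane is given as a point and a normal):
#include <cmath>
#include <glm/glm.hpp>
// Intersect ray (origin o, unit direction d) with the plane through p0
// with normal n; returns false for a parallel or behind-the-origin hit.
bool rayPlane(glm::vec3 o, glm::vec3 d, glm::vec3 p0, glm::vec3 n, glm::vec3& hit)
{
    float denom = glm::dot(n, d);
    if (std::fabs(denom) < 1e-6f) return false; // ray parallel to plane
    float t = glm::dot(p0 - o, n) / denom;
    if (t < 0.0f) return false;                 // plane is behind the ray
    hit = o + d * t;
    return true;
}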
I have little DirectX experience, but I'm sure it's similar to OpenGL. What you want is the gluUnProject call.
Assuming you have a valid Z buffer you can query the contents of the Z buffer at a mouse position with:
// obtain the viewport, modelview matrix and projection matrix
// you may keep the viewport and projection matrices throughout the program if you don't change them
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
// obtain the Z position (not world coordinates but in range 0 - 1)
GLfloat z_cursor;
glReadPixels(x_cursor, y_cursor, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
// obtain the world coordinates
GLdouble x, y, z;
gluUnProject(x_cursor, y_cursor, z_cursor, modelview, projection, viewport, &x, &y, &z);
If you don't want to use GLU, you can also implement gluUnProject yourself; its functionality is relatively simple and is described at opengl.org.
Ok, this topic is old but it was the best I found on the topic, and it helped me a bit, so I'll post here for those who are following ;-)
This is the way I got it to work without having to compute the inverse of the projection matrix:
void Application::leftButtonPress(u32 x, u32 y){
GL::Viewport vp = GL::getViewport(); // just a call to glGet GL_VIEWPORT
vec3f p = vec3f::from(
((float)(vp.width - x) / (float)vp.width),
((float)y / (float)vp.height),
1.);
// alternatively vec3f p = vec3f::from(
// ((float)x / (float)vp.width),
// ((float)(vp.height - y) / (float)vp.height),
// 1.);
p *= vec3f::from(APP_FRUSTUM_WIDTH, APP_FRUSTUM_HEIGHT, 1.);
p += vec3f::from(APP_FRUSTUM_LEFT, APP_FRUSTUM_BOTTOM, 0.);
// now p elements are in (-1, 1)
vec3f near = p * vec3f::from(APP_FRUSTUM_NEAR);
vec3f far = p * vec3f::from(APP_FRUSTUM_FAR);
// ray in world coordinates
Ray ray = { _camera->getPos(), -(_camera->getBasis() * (far - near).normalize()) };
_ray->set(ray.origin, ray.dir, 10000.); // this is a debugging vertex array to see the Ray on screen
Node* node = _scene->collide(ray, Transform());
cout << "node is : " << node << endl;
}
This assumes a perspective projection, but the question never arises for the orthographic one in the first place.
I've got the same situation with ordinary ray picking, but something is wrong. I've performed the unproject operation the proper way, but it just doesn't work. I think I've made some mistake, but I can't figure out where. My matrix multiplication, inverse, and vector-by-matrix multiplication all seem to work fine; I've tested them.
In my code I'm reacting to WM_LBUTTONDOWN. lParam returns the [Y][X] coordinates as 2 words in a dword. I extract them, then convert to normalized space; I've checked this part also works fine. When I click the lower left corner I'm getting values close to -1, -1 and good values for all 3 other corners. I'm then using the line_points.vtx array for debugging, and it's not even close to reality.
unsigned int x_coord=lParam&0x0000ffff; //X RAW COORD
unsigned int y_coord=client_area.bottom-(lParam>>16); //Y RAW COORD
double xn=((double)x_coord/client_area.right)*2-1; //X [-1 +1]
double yn=1-((double)y_coord/client_area.bottom)*2;//Y [-1 +1]
_declspec(align(16))gl_vec4 pt_eye(xn,yn,0.0,1.0);
gl_mat4 view_matrix_inversed;
gl_mat4 projection_matrix_inversed;
cam.matrixProjection.inverse(&projection_matrix_inversed);
cam.matrixView.inverse(&view_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&projection_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&view_matrix_inversed);
line_points.vtx[line_points.count*4]=pt_eye.x-cam.pos.x;
line_points.vtx[line_points.count*4+1]=pt_eye.y-cam.pos.y;
line_points.vtx[line_points.count*4+2]=pt_eye.z-cam.pos.z;
line_points.vtx[line_points.count*4+3]=1.0;