I'm working on a project where I have a cloud of points in space as input data; my goal is to create a surface.
I started by computing a regression plane for the cloud, then I projected my points onto the plane using dot products:
My plane is represented by a point and a normal. I construct the axes of the plane's space using cross products, then project each point onto these axes.
Then I triangulate in 2D (that's the point of the whole operation).
My problem is that my points are now in the plane's space, and I want to get them back to their initial positions (invert the transformation) so that my surface lies ON my points.
thank you :)
The best way is to keep the original positions and have the triangulation give you indices rather than positions. I hope it helps!
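A minimal sketch of that idea in C++ (the triangulate2D routine and the small vector helpers here are placeholders for whatever 2D triangulation library and math types you actually use):

#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
static Vec3 normalize(Vec3 a)     { double l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

// Hypothetical 2D Delaunay routine from whatever library you use;
// the important part is that it returns triangles as INDEX triples.
std::vector<std::array<int, 3>> triangulate2D(const std::vector<std::array<double, 2>>& pts);

std::vector<std::array<int, 3>> buildSurface(const std::vector<Vec3>& cloud,
                                             Vec3 planeOrigin, Vec3 planeNormal)
{
    // Build the plane's in-plane axes with cross products (as in the question).
    Vec3 n = normalize(planeNormal);
    Vec3 helper = std::fabs(n.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0}; // any vector not parallel to n
    Vec3 u = normalize(cross(n, helper));
    Vec3 v = cross(n, u);

    // Project every point into plane space with dot products.
    std::vector<std::array<double, 2>> projected;
    projected.reserve(cloud.size());
    for (const Vec3& p : cloud)
    {
        Vec3 d = sub(p, planeOrigin);
        projected.push_back({dot(d, u), dot(d, v)});
    }

    // Triangulate in 2D; each triangle is {i, j, k} indices into 'projected'.
    // Because projected[i] came from cloud[i], the same indices address the
    // ORIGINAL 3D positions: the mesh (cloud, triangles) already sits on your
    // points, no inverse transform needed.
    // (If you ever do need the inverse, it is simply
    //  p = planeOrigin + s*u + t*v for a plane-space point (s, t).)
    return triangulate2D(projected);
}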
How can I create a gradient like the one seen in the image below using GLSL?
What is the best method for achieving a smooth transition from opaque at the edges of the polygon being drawn to transparent at its center?
The image you referred to is achieved through what is called a distance transform. It is a very useful and common operation, widely applied in image processing, computer vision, robot path planning, etc. What it does is, for each pixel of an image, compute the 2D Euclidean distance from the pixel to the nearest edge of the polygon. The output is an image whose pixel values indicate the minimum distance. To visualize the results, we map the distance to grayscale. In particular, in your reference image, the bright white ridge has the largest distance to the boundary, while the dark areas contain much smaller values because they are very close to the polygon boundary.
In terms of implementation, a brute-force approach is to draw the 2D image you want to transform, and in the fragment shader, compute the distance from the current fragment position to every edge of the polygon and output the minimum value to a framebuffer. The geometry information of the polygon can be stored in another texture. Eventually, you end up with a 2D texture whose pixel values encode the shortest distance to the edges of the polygon.
You can also find an implementation of this common transform in the OpenCV library.
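For the OpenCV route, the relevant call is cv::distanceTransform; a rough sketch (the mask file name is just an example, and the mask is assumed to be a binary image that is non-zero inside the polygon):

#include <opencv2/opencv.hpp>

int main()
{
    // Binary mask of the polygon: non-zero inside, zero outside/on the boundary.
    cv::Mat mask = cv::imread("polygon_mask.png", cv::IMREAD_GRAYSCALE);

    // For every non-zero pixel, compute the Euclidean distance to the nearest zero pixel.
    cv::Mat dist;
    cv::distanceTransform(mask, dist, cv::DIST_L2, 3);

    // Map distances to [0,1] for display: the "ridge" far from the edges becomes bright.
    cv::normalize(dist, dist, 0.0, 1.0, cv::NORM_MINMAX);
    cv::imshow("distance transform", dist);
    cv::waitKey(0);
    return 0;
}

The normalized distance can then be uploaded as a texture and used directly as the alpha/gradient value in your shader.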
I have the azimuth, elevation, and direction vector of the sun. I want to place a viewpoint along the sun-ray direction at some distance. Can anyone describe or provide a link to a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from azimuth and elevation, and then, to find the viewport origin (see the image linked in this question):
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
An azimuthal coordinate system is referenced to the NEH (geometric North, East, High (Up)) reference frame!
In your linked image it is referenced to the -Y axis, which is not right unless you are not rendering the world but doing some nonlinear graph/plot projection, so which one is it?
BTW, here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84.
As far as I can see you have a bad conversion between coordinates, so just to be clear, this is how it looks:
On the left is a global Earth view with one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view, and on the right is a surface-aligned top view. Blue, magenta, and green are the input azimuthal coordinates; brown are the x, y, z Cartesian projections (where the coordinate lies on its axis), so:
Dist' =  Dist  * cos(Elev);
z     =  Dist  * sin(Elev);
x     =  Dist' * cos(Azimut);
y     = -Dist' * sin(Azimut);
If you use a different reference frame or axis orientations, then change it accordingly...
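Those four lines translate directly into code; a small sketch (angles in radians, axis signs as in the NEH convention above, so adjust them if your frame differs):

#include <cmath>

struct Vec3 { double x, y, z; };

// Convert azimuthal coordinates (Azimut, Elev, Dist) to Cartesian x, y, z
// in the NEH-style frame described above.
Vec3 azimuthalToCartesian(double azimut, double elev, double dist)
{
    double distP = dist * std::cos(elev);  // Dist' = projection onto the horizontal plane
    Vec3 p;
    p.z =  dist  * std::sin(elev);
    p.x =  distP * std::cos(azimut);
    p.y = -distP * std::sin(azimut);
    return p;
}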
I suspect you use 4x4 homogeneous transform matrices
for representing coordinate systems and also to hold your view-port, so look here:
transform matrix anatomy
constructing the view-port
You need the X, Y, Z axis vectors and the O origin position. O you already have (at least you think), and the Z axis is the ray direction, so you should have it too. Now just compute X, Y as an alignment to something (otherwise the view will rotate around the ray); I use NEH for that, so:
view.Z=Ray.Dir // ray direction
view.Y=NEH.Z // NEH up vector
view.X=view.Y x view.Z // cross product makes view.X perpendicular to Y and Z
view.Y=view.Z x view.X // just to make all three axes perpendicular to each other
view.O=ground position - (distance*Ray.Dir);
To make it a valid view_port you have to:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that
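A GLM-based sketch of the whole construction (the NEH up vector, field of view, and clip planes are placeholders; note that with GLM/OpenGL column-vector conventions the combination is written projection * inverse(view)):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the view-port matrix for a camera sitting on the sun ray,
// 'distance' away from 'ground', looking along 'rayDir'.
glm::mat4 makeViewport(glm::vec3 ground, glm::vec3 rayDir, glm::vec3 nehUp,
                       float distance, float aspect)
{
    glm::vec3 Z = glm::normalize(rayDir);                // view.Z = ray direction
    glm::vec3 X = glm::normalize(glm::cross(nehUp, Z));  // view.X perpendicular to up and Z
    glm::vec3 Y = glm::cross(Z, X);                      // re-orthogonalized view.Y
    glm::vec3 O = ground - distance * Z;                 // view.O = ground - distance * ray

    // Axes + origin packed into a 4x4 homogeneous transform (column-major in GLM).
    glm::mat4 view(1.0f);
    view[0] = glm::vec4(X, 0.0f);
    view[1] = glm::vec4(Y, 0.0f);
    view[2] = glm::vec4(Z, 0.0f);
    view[3] = glm::vec4(O, 1.0f);

    // Placeholder projection; replace fov/near/far with your own values.
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 1000.0f);
    return projection * glm::inverse(view);
}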
If you want the whole thing
then you also want to add the Sun/Earth position computation; in that case, look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it's clear what is behind this, you just need to set the distance. If you want to set it to the Sun, then it will be distance = 1.0 AU (astronomical unit), but that is a huge distance, and with perspective your Earth will be very small; instead, use some closer distance to match your view size. Look here:
How to position the camera so that the object always has the same size
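For the "closer distance to match your view size" part, the usual relation is plain trigonometry: an object of radius r roughly fills a perspective view with vertical field of view fov when it is r / tan(fov/2) away. A tiny sketch:

#include <cmath>

// Distance at which a sphere of radius 'r' roughly fills a perspective view
// with vertical field of view 'fovRadians'.
double distanceForSize(double r, double fovRadians)
{
    return r / std::tan(fovRadians * 0.5);
}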
I'd like to map from normalized device coordinates back to viewspace.
The other way around works like this:
viewspace -> clip space : multiply the homogeneous coordinates by the projection matrix
clip space -> normalized device coordinates: divide the (x,y,z,w) by w
Now, in normalized device coordinates, all coordinates that were within the view frustum fall into the cube x,y,z ∈ [-1,1] with w=1.
Now I'd like to transform some points on the boundary of that cube back into view coordinates. The projection matrix is nonsingular, so I can use its inverse to get from clip space to view space. But I don't know how to get from normalized device space to clip space, since I don't know how to calculate the 'w' I need to multiply the other coordinates by.
Can someone help me with that? Thanks!
Unless you actually want to recover your clip space values for some reason you don't need to calculate the W. Multiply your NDC point by the inverse of the projection matrix and then divide by W to get back to view space.
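In GLM terms that is roughly the following (the projection matrix is whatever you used to go from view space to clip space in the first place):

#include <glm/glm.hpp>

// Map a point given in normalized device coordinates back to view space.
glm::vec3 ndcToViewSpace(const glm::vec3& ndc, const glm::mat4& projection)
{
    // Treat the NDC point as a homogeneous point with w = 1 and undo the projection.
    glm::vec4 unprojected = glm::inverse(projection) * glm::vec4(ndc, 1.0f);

    // Undo the perspective divide: divide by the resulting w.
    return glm::vec3(unprojected) / unprojected.w;
}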
The flow graph on the top, and the formulas described on the following page, might help you : http://www.songho.ca/opengl/gl_transform.html
It's been a while since my math in university, and now I've come to need it like I never thought I would.
So, this is what I want to achieve:
Having a set of 3D points (geographical points, latitude and longitude; altitude doesn't matter), I want to display them on a screen, taking into account the direction I'm looking in.
This is going to be used along with a camera and a compass, so when I point the camera to the North, I want to display on my computer the points that the camera should "see". It's a kind of Augmented Reality.
Basically, what (I think) I need is a way of transforming the 3D points viewed from above (like viewing the points on Google Maps) into a set of 3D points viewed from the side.
The conversion of Latitude and longitude to 3-D cartesian (x,y,z) coordinates can be accomplished with the following (Java) code snippet. Hopefully it's easily converted to your language of choice. lat and lng are initially the latitude and longitude in degrees:
lat*=Math.PI/180.0;
lng*=Math.PI/180.0;
z=Math.sin(-lat);
x=Math.cos(lat)*Math.sin(-lng);
y=Math.cos(lat)*Math.cos(-lng);
The vector (x,y,z) will always lie on a sphere of radius 1 (i.e. the Earth's radius has been scaled to 1).
From there, a 3D perspective projection is required to convert the (x,y,z) into (X,Y) screen coordinates, given a camera position and angle. See, for example, http://en.wikipedia.org/wiki/3D_projection
It really depends on the degree of precision you require. If you're working on a high-precision, close-in view of points anywhere on the globe, you will need to take the ellipsoidal shape of the Earth into account. This is usually done using an algorithm similar to the one described here, on page 38 under 'Conversion between Geographical and Cartesian Coordinates':
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
If you don't need high precision the techniques mentioned above work just fine.
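For the high-precision case, the standard geodetic-to-Cartesian (ECEF) conversion for an ellipsoid looks like this; a sketch using the WGS84 constants (h is the height above the ellipsoid in metres):

#include <cmath>

struct Ecef { double x, y, z; };

// Geodetic latitude/longitude (radians) and ellipsoidal height (metres) to
// Earth-centred Earth-fixed Cartesian coordinates on the WGS84 ellipsoid.
Ecef geodeticToEcef(double lat, double lng, double h)
{
    const double a  = 6378137.0;            // WGS84 semi-major axis
    const double f  = 1.0 / 298.257223563;  // WGS84 flattening
    const double e2 = f * (2.0 - f);        // first eccentricity squared

    double sinLat = std::sin(lat);
    double N = a / std::sqrt(1.0 - e2 * sinLat * sinLat); // prime vertical radius of curvature

    Ecef p;
    p.x = (N + h) * std::cos(lat) * std::cos(lng);
    p.y = (N + h) * std::cos(lat) * std::sin(lng);
    p.z = (N * (1.0 - e2) + h) * sinLat;
    return p;
}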
Could anyone explain to me exactly what these params mean?
I've tried, and the results were very weird, so I guess I am misunderstanding some of the params for the perspective projection:
* a_{x,y,z} - the point in 3D space that is to be projected.
* c_{x,y,z} - the location of the camera.
* θ_{x,y,z} - the rotation of the camera. When c_{x,y,z} = <0,0,0> and θ_{x,y,z} = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.
* e_{x,y,z} - the viewer's position relative to the display surface. [1]
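A sketch of how those parameters can fit together in a simple pinhole-style projection (this follows one common reading of the article's conventions, so signs and rotation order may need flipping for your setup, and d.z must be non-zero, i.e. the point must be off the camera plane):

#include <cmath>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

static Vec3 rotateX(Vec3 v, double t) { return {v.x, v.y*std::cos(t) - v.z*std::sin(t), v.y*std::sin(t) + v.z*std::cos(t)}; }
static Vec3 rotateY(Vec3 v, double t) { return {v.x*std::cos(t) + v.z*std::sin(t), v.y, -v.x*std::sin(t) + v.z*std::cos(t)}; }
static Vec3 rotateZ(Vec3 v, double t) { return {v.x*std::cos(t) - v.y*std::sin(t), v.x*std::sin(t) + v.y*std::cos(t), v.z}; }

// a     : the 3D point to project
// c     : camera position
// theta : camera rotation (Euler angles, radians)
// e     : viewer position relative to the display surface (e.z acts as the focal length)
Vec2 project(Vec3 a, Vec3 c, Vec3 theta, Vec3 e)
{
    // Transform the point into camera space: translate, then apply the inverse camera rotation.
    Vec3 d = {a.x - c.x, a.y - c.y, a.z - c.z};
    d = rotateZ(rotateY(rotateX(d, -theta.x), -theta.y), -theta.z);

    // Perspective divide onto the display surface (d.z must not be 0).
    Vec2 b;
    b.x = (e.z / d.z) * d.x + e.x;
    b.y = (e.z / d.z) * d.y + e.y;
    return b;
}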
Well, you'll want some 3D vector arithmetic to move your origin, and probably some quaternion-based rotation functions to rotate the vectors to match your direction. There are any number of good tutorials on using quaternions to rotate 3D vectors (since they're used a lot for rendering and such), and the 3D vector stuff is pretty simple if you can remember how vectors are represented.
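If you use GLM, the quaternion part is only a couple of calls; a sketch (the assumption here is that your compass heading is an angle about the world up axis, taken to be +Z):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate a direction vector by a compass heading (radians) about the up axis.
glm::vec3 rotateByHeading(const glm::vec3& v, float headingRadians)
{
    glm::quat q = glm::angleAxis(headingRadians, glm::vec3(0.0f, 0.0f, 1.0f)); // up = +Z here
    return q * v; // quaternion * vector applies the rotation
}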
Well, just a piece of advice: you can plot these points in a 3D space (you can do this easily using OpenGL).
You have to transform the lat/long into another system, for example polar or Cartesian.
So, starting from lat/long, you put the origin of your space at the center of the Earth; then you have to transform your data into Cartesian coordinates:
z = R * sin(lat)
x = R * cos(lat) * sin(long)
y = R * cos(lat) * cos(long)
R is the radius of the world; you can set it to 1 if you only need to catch the direction between your point of view and the points you need "to see".
Then put the virtual camera at a point in the space you've created, and link the data from your real camera (simply a vector) to the data of the virtual one.
The next step to achieve what you want is to try to plot the images from your camera overlaid on your "virtual space"; basically, you should have a real camera that acts as a control to move the virtual one in the virtual space.
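One way to sketch that last linking step with GLM (the azimuth/pitch inputs and the +Z up axis are assumptions about your sensor data and world frame):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a view matrix for the virtual camera from the real camera's pose:
// 'eye' is where the observer stands, 'azimuth' the compass heading and
// 'pitch' the tilt, both in radians. Up is assumed to be +Z.
glm::mat4 virtualCameraFromCompass(const glm::vec3& eye, float azimuth, float pitch)
{
    // Direction the real camera is pointing, derived from heading and tilt.
    glm::vec3 dir(std::cos(pitch) * std::sin(azimuth),
                  std::cos(pitch) * std::cos(azimuth),
                  std::sin(pitch));

    return glm::lookAt(eye, eye + dir, glm::vec3(0.0f, 0.0f, 1.0f));
}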