I'm working on a project where I have a cloud of points in space as input data; my goal is to create a surface.
I started by computing a regression plane for the cloud, then I projected my points onto the plane using dot products:
My plane is represented by a point and a normal; I construct the axes of the plane's space using cross products, then project each point onto these axes.
Then I triangulate in 2D (that's the point of the whole operation).
My problem is that my points are now in the plane's space and I want to get them back to their initial positions (invert the transformation) so that the surface lies ON my points.
Thank you :)
The best way is to keep the original positions and have the triangulation give you indices rather than positions; the triangles then refer directly to the original 3D points. I hope it helps!
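For illustration, here is a minimal Python sketch of that idea (assuming numpy and scipy are available, and using an SVD for the regression plane, which may differ from your fitting method): the triangulation is computed on the projected 2D coordinates, but it returns index triples, so the triangles can be applied directly to the untouched 3D points and no inverse transformation is needed.

import numpy as np
from scipy.spatial import Delaunay

def triangulate_cloud(points):
    # points: (N, 3) array of the original 3D positions
    centroid = points.mean(axis=0)
    centred = points - centroid
    # Best-fit (regression) plane via SVD: the first two right singular
    # vectors span the plane, the third is its normal
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    u, v = vt[0], vt[1]
    # Project each point onto the plane axes with dot products
    pts2d = np.column_stack([centred @ u, centred @ v])
    # Delaunay returns triangles as index triples into the input array,
    # so each triple can be used directly with the original 3D points
    return Delaunay(pts2d).simplices

Each row of the returned array is three indices; indexing your original point array with them gives a triangle that already sits on the cloud.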
I've run into quite a big problem. I am average at math and I need to solve something that is not well covered on the internet.
My problem: I have a 2D space defined by X and Y. This space is just a drawing space. I want to assign to particular (X, Y) positions a color with RGB values.
So let's say I have 4 points with defined positions in XY and colors in Z:
[0, 0, [255, 0, 0]]
[0, 10, [0, 255, 0]]
[10, 10, [0, 0, 255]]
[5, 5, [0, 0, 0]]
and my drawing space is XY: 15x15.
And I want to distribute the colors to all the empty points.
For me it's quite a delicate problem, because the Z axis is basically a 3D space by itself.
My whole intention is to create a color map in which points 1, 2, 3, 4 have smooth transitions between them.
I am able to solve this in 1D, where the transition is between 2 points. But I need to create a 2D color map in the XY drawing space based on a surface fitted to these 4 points, which depends both on the 3D RGB space and on the distances between the points in the XY drawing space.
Thanks in advance for the help.
You do not show any algorithm or code, so I will just explain a high-level algorithm. If you need more details, code, or mathematical formulae, show more of your own work and then ask. You also do not explain just what you mean by "smooth transition"; there are multiple meanings. The following will result in continuous shading, but it may not be smooth enough for your purposes.
First, given your points in the rectangular drawing space, find the Voronoi diagram for those points. This divides the drawing space into convex polygons, each polygon around one of your points.
For each vertex in the Voronoi diagram, figure out which of your points are closest to that vertex; there will usually be just three of them, but there could be more. Then at that vertex, assign the color that is the average of the RGB values of those nearby given points. That is, average the R values, the G values, and the B values separately.
For any point on a Voronoi polygon edge, its color is the weighted average of the two colors at the edge's endpoints. I.e. if the point is one-third of the distance from one end, its RGB value is one-third of the way from that endpoint's values to the other's.
Finally, for any point inside a Voronoi polygon, calculate the ray from the point that defined that polygon (the "center point") through the current point you are looking at. Find where that ray intersects the polygon. The RGB value is then the weighted average of the values of the center point and the polygon-intersection point.
The hardest part of all that is finding the Voronoi diagram. Fortune's algorithm can do this in a reasonable time. You can probably find a library to do that for you in your chosen programming language.
Another algorithm is to start with a triangulation of your given points and the corners of the drawing region. Then the color of any point in a triangle is the weighted average of the colors of its three vertices, with weights given by the point's barycentric coordinates in that triangle. This will automatically be consistent for points on the vertices or edges of the triangles, so this is probably simpler than my previous algorithm. The difficulty here is finding a triangulation (any will do).
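A small Python sketch of that second approach, using scipy's Delaunay triangulation of just the four sample points (so it only fills the region inside their convex hull; adding the drawing-region corners, with colors chosen for them, would cover the whole 15x15 space):

import numpy as np
from scipy.spatial import Delaunay

# The four sample points from the question: (x, y) positions and RGB colors
pts = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
cols = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [0, 0, 0]], dtype=float)

tri = Delaunay(pts)

def color_at(x, y):
    p = np.array([x, y], dtype=float)
    s = int(tri.find_simplex(p[None, :])[0])
    if s < 0:
        return None                       # outside the triangulated region
    T = tri.transform[s]                  # affine map to barycentric coordinates
    b = T[:2] @ (p - T[2])
    w = np.append(b, 1.0 - b.sum())       # barycentric weights of the 3 vertices
    return w @ cols[tri.simplices[s]]     # weighted average of the vertex colors

print(color_at(2.0, 6.0))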
I'm writing a data analysis program and part of it requires finding the volume of a shape. The shape information comes in the form of a list of points, giving the radius and the angular coordinates of each point.
If the data points were uniformly distributed in coordinate space I would be able to perform the integral, but unfortunately the data points are basically randomly distributed.
My inefficient approach would be to find the nearest neighbours to each point and stitch the shape together like that, finding the volume of the stitched together parts.
Does anyone have a better approach to take?
Thanks.
If those are surface points, one good way to do it would be to discretize the surface as triangles and convert the volume integral to a surface integral using the divergence theorem (the 3D analogue of Green's theorem). Then you can use simple Gauss quadrature over the triangles.
Ok, here it is, along duffymo's lines I think.
First, triangulate the surface, and make sure you have a consistent orientation of the triangles, meaning that the orientation of neighbouring triangles is such that their common edge is traversed in opposite directions.
Second, for each triangle ABC compute this expression: H*cross2D(B-A, C-A)/2, where cross2D computes the cross product using the X and Y coordinates only, ignoring Z, and H is the Z-coordinate of any convenient point in the triangle (using the barycentre makes the result exact for a flat triangle).
Third, sum up all the above expressions. The result would be the signed volume inside the surface (plus or minus depending on the choice of orientation).
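A direct Python sketch of that sum (the names are just illustrative; it assumes the surface has already been triangulated into a vertex array plus consistently oriented index triples):

def signed_volume(vertices, triangles):
    # vertices: sequence of (x, y, z) surface points
    # triangles: (ia, ib, ic) index triples with consistent orientation
    # (neighbouring triangles traverse their shared edge in opposite directions)
    total = 0.0
    for ia, ib, ic in triangles:
        a, b, c = vertices[ia], vertices[ib], vertices[ic]
        cross2d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        h = (a[2] + b[2] + c[2]) / 3.0        # Z of the barycentre
        total += h * cross2d / 2.0            # signed prism volume down to z = 0
    return total                              # sign depends on the chosen orientation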
Sounds like you want the convex hull of a point cloud. Fortunately, there are efficient ways of getting you there. Check out scipy.spatial.ConvexHull.
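For example (assuming the radius/angle data has already been converted to Cartesian (x, y, z) points; note the hull volume only matches the shape's volume if the shape is actually convex):

import numpy as np
from scipy.spatial import ConvexHull

# Made-up stand-in data: replace with your points converted to Cartesian (x, y, z)
points = np.random.rand(200, 3)
hull = ConvexHull(points)
print(hull.volume)   # volume enclosed by the hull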
This seems like a question for which an answer should be readily available on the web or in books, but my quest for an answer has so far led me only to blind alleys that turned out to be dead ends.
I'm trying to draw 3D lines in real-time with hidden surface removal (the lines are edges of solid objects).
So I have two 3D points that were projected to 2D points using perspective projection. For each point I have computed the depth of the point. Now I want to draw the line segment that joins the 2 points, and for hidden surface removal to work I have to compute, for each intermediate 2D point on the 2D line (that results from the projection), the depth of the corresponding 3D point (the 3D point that projects onto that intermediate 2D point).
My problem is that, since the depth function isn't linear under perspective projection, I can't linearly interpolate between the depths of the 2 original 3D points to compute the depth of an intermediate point.
So how do I compute the depth of each point on the line with a method that's compatible with the constraints of real-time rendering?
Thanks in advance for any help.
Use homogeneous coordinates, which can be linearly interpolated in screen space: http://www.cs.unc.edu/~olano/papers/2dh-tri/
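A sketch of the usual trick (not taken from the linked paper, just the standard approach): quantities divided by the view-space depth, including 1/z itself, interpolate linearly in screen space, so you can recover the true depth anywhere along the projected segment.

def depth_along_segment(z0, z1, t):
    # t in [0, 1] is the fraction along the *projected* 2D segment.
    # 1/z interpolates linearly in screen space; z itself does not.
    inv_z = (1.0 - t) * (1.0 / z0) + t * (1.0 / z1)
    return 1.0 / inv_z

# depth halfway along the projected segment between depths 2.0 and 10.0
print(depth_along_segment(2.0, 10.0, 0.5))   # 3.333..., not the naive 6.0

Hardware rasterizers do the same thing with the w component of the homogeneous clip-space coordinates.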
It's been a while since my math courses at university, and now I've come to need them like I never thought I would.
So, this is what I want to achieve:
Given a set of 3D points (geographical points, latitude and longitude; altitude doesn't matter), I want to display them on a screen, taking the viewing direction into account.
This is going to be used along with a camera and a compass, so when I point the camera to the North, I want to display on my computer the points that the camera should "see". It's a kind of augmented reality.
Basically what (I think) I need is a way of transforming the 3D points viewed from above (like viewing the points on Google Maps) into a set of 3D points viewed from the side.
The conversion of latitude and longitude to 3D Cartesian (x, y, z) coordinates can be accomplished with the following (Java) code snippet. Hopefully it's easily converted to your language of choice. lat and lng are initially the latitude and longitude in degrees:
// Convert from degrees to radians
lat *= Math.PI / 180.0;
lng *= Math.PI / 180.0;
// Point on the unit sphere
double z = Math.sin(-lat);
double x = Math.cos(lat) * Math.sin(-lng);
double y = Math.cos(lat) * Math.cos(-lng);
The vector (x,y,z) will always lie on a sphere of radius 1 (i.e. the Earth's radius has been scaled to 1).
From there, a 3D perspective projection is required to convert the (x,y,z) into (X,Y) screen coordinates, given a camera position and angle. See, for example, http://en.wikipedia.org/wiki/3D_projection
It really depends on the degree of precision you require. If you're working on a high-precision, close-in view of points anywhere on the globe you will need to take the ellipsoidal shape of the Earth into account. This is usually done using an algorithm similar to the one described here, on page 38 under 'Conversion between Geographical and Cartesian Coordinates':
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
If you don't need high precision the techniques mentioned above work just fine.
Could anyone explain to me exactly what these parameters mean?
I've tried, and the results were very weird, so I guess I am misunderstanding some of the parameters for the perspective projection:
* a_{x,y,z} - the point in 3D space that is to be projected.
* c_{x,y,z} - the location of the camera.
* θ_{x,y,z} - the rotation of the camera. When c_{x,y,z} = <0,0,0> and θ_{x,y,z} = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.
* e_{x,y,z} - the viewer's position relative to the display surface. [1]
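As a concrete illustration of how those parameters fit together, here is a rough Python sketch of the formulas from that Wikipedia page (the rotation order and axis conventions there may differ from your setup, which is a common source of "very weird" results):

import numpy as np

def project(a, c, theta, e):
    # a: 3D point to project; c: camera location; theta: camera rotation
    # about the x, y, z axes (radians); e: viewer position relative to
    # the display surface (e[2] acts like a focal length)
    sx, sy, sz = np.sin(theta)
    cox, coy, coz = np.cos(theta)
    x, y, z = np.asarray(a, float) - np.asarray(c, float)
    # Camera-space coordinates (the d_x, d_y, d_z of the Wikipedia page)
    dx = coy * (sz * y + coz * x) - sy * z
    dy = sx * (coy * z + sy * (sz * y + coz * x)) + cox * (coz * y - sz * x)
    dz = cox * (coy * z + sy * (sz * y + coz * x)) - sx * (coz * y - sz * x)
    # Perspective divide (requires dz != 0, i.e. the point is not in the camera plane)
    return np.array([(e[2] / dz) * dx + e[0], (e[2] / dz) * dy + e[1]])

# e.g. no rotation, camera at the origin, display plane at distance 1:
# project([1.0, 2.0, 5.0], [0, 0, 0], [0, 0, 0], [0, 0, 1])  ->  array([0.2, 0.4])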
Well, you'll want some 3D vector arithmetic to move your origin, and probably some quaternion-based rotation functions to rotate the vectors to match your direction. There are any number of good tutorials on using quaternions to rotate 3D vectors (since they're used a lot for rendering and such), and the 3D vector stuff is pretty simple if you can remember how vectors are represented.
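For instance, with scipy's rotation helper (which stores the rotation as a quaternion internally; the heading, camera position, and points below are made up for illustration):

import numpy as np
from scipy.spatial.transform import Rotation

cam = np.array([10.0, 0.0, 3.0])                     # hypothetical camera position
points = np.array([[12.0, 1.0, 3.0],
                   [9.0, -2.0, 4.0]])
heading_deg = 35.0                                   # hypothetical compass reading

# Move the origin to the camera, then rotate about the vertical (z) axis.
# Rotation.from_quat() works too if you already have a quaternion.
rot = Rotation.from_euler('z', -heading_deg, degrees=True)
view_space = rot.apply(points - cam)
print(view_space)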
Well, just a piece of advice: you can plot these points in a 3D space (you can do this easily using OpenGL).
You have to transform the lat/long into another system, for example polar or Cartesian.
So starting from lat/long, you put the origin of your space at the center of the Earth, then you transform your data into Cartesian coordinates:
z = R * sin(lat)
x = R * cos(lat) * sin(long)
y = R * cos(lat) * cos(long)
R is the radius of the Earth; you can set it to 1 if you only need to capture the direction between your point of view and the points you need "to see".
Then put a virtual camera at a point in the space you've created, and link the data from your real camera (simply a vector) to the data of the virtual one.
The next step towards what you want is to plot the images from your camera overlapped with your "virtual space"; essentially the real camera acts as a control for moving the virtual one around the virtual space.
Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, which at the lower right, and which is the topmost? The polygon may have any 3D orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY.
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When looping is complete, your box is defined by those values: MinX, MinY is one corner (say the upper left), MaxX, MinY the upper right, and so on...
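A quick Python sketch of that loop (assuming the points are already expressed in the 2D coordinates you care about):

def bounding_box(points):
    # points: sequence of (x, y) pairs
    min_x = max_x = points[0][0]
    min_y = max_y = points[0][1]
    for x, y in points[1:]:
        min_x, max_x = min(min_x, x), max(max_x, x)
        min_y, max_y = min(min_y, y), max(max_y, y)
    # Corners: (min_x, min_y), (max_x, min_y), (max_x, max_y), (min_x, max_y)
    return min_x, min_y, max_x, max_y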
Response to your comment:
If you want your box after a projection, what you need is to get the "transformed" points, then apply the bounding box loop as stated above.
"Transformed" usually implies 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on any plane that you projected onto.
A possible algorithm (sketched in code below) would be:
1. Find the normal, which you can do by using the cross product of vectors connecting two pairs of different corners.
2. Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis).
3. Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space this is a considerably simpler problem).
4. Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
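A rough numpy sketch of those four steps (the choice of in-plane axes here is arbitrary, so "lower left" and the like are relative to that arbitrary frame, which is the ambiguity other answers point out):

import numpy as np

def corners_in_3d(points):
    # points: (N, 3) array of coplanar polygon vertices
    p0 = points[0]
    normal = np.cross(points[1] - p0, points[2] - p0)
    normal = normal / np.linalg.norm(normal)
    # Rotation whose rows are an orthonormal basis with the normal as third axis,
    # so it maps the normal onto Z
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:               # normal already along Z
        u = np.array([1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)
    R = np.vstack([u, v, normal])              # R @ normal == [0, 0, 1]
    flat = points @ R.T                        # polygon now lies in a z = const plane
    # Bounding-box corners in the rotated frame
    min_xy, max_xy = flat[:, :2].min(axis=0), flat[:, :2].max(axis=0)
    z = flat[0, 2]
    corners = np.array([[min_xy[0], min_xy[1], z],
                        [max_xy[0], min_xy[1], z],
                        [max_xy[0], max_xy[1], z],
                        [min_xy[0], max_xy[1], z]])
    return corners @ R                         # rotate back to the original 3D frame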
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
1. Get the minimum/maximum value from each three-dimensional axis of each point on your 2D polygon. A single pass with a loop/iterator over the list of values for every point will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and "highest" X, Y, Z coordinates.
2. Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of axis examined or orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
3. Then get the distance of each "2D polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably post-capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
If you can assume that there are some constraints regarding the shapes, then you might be able to get away with knowing less information. For example, if your shape was the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass". The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip, etc. If there was some order to the list, like points being entered in counterclockwise order (about the normal), you could identify all the points.
This sounds like a bit of computation, so it might be reasonable to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This gives you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by an ordering of the point list.
If you can't assume anything about the shapes (or even if the shapes were symmetrical, for example), then you will need more data. It depends on your constraints.
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located) - but this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and with the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin) - and this is where the trouble starts.
I would recommend using the Y axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up" - but then you are in trouble in case your plane is orthogonal to the Y axis (in that case you might want to use the projected Z axis as "up").
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system. You can find all of it by googling for raytracer algorithms for polygons.
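A small numpy sketch of that construction (projecting the world Y axis onto the plane as "up", falling back to Z when the plane is perpendicular to Y; the origin choice is arbitrary, as noted):

import numpy as np

def plane_coordinates(points, normal):
    # points: (N, 3) coplanar vertices; normal: unit normal of their plane
    origin = points[0]                             # arbitrary origin on the plane
    up = np.array([0.0, 1.0, 0.0])
    up = up - np.dot(up, normal) * normal          # project world Y onto the plane
    if np.linalg.norm(up) < 1e-9:                  # plane is orthogonal to Y
        up = np.array([0.0, 0.0, 1.0])
        up = up - np.dot(up, normal) * normal
    up = up / np.linalg.norm(up)
    right = np.cross(up, normal)                   # completes the orthonormal basis
    rel = points - origin
    return np.column_stack([rel @ right, rel @ up])  # 2D (right, up) coordinates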