Mapping points in 2d space to a sphere - math

I have a bunch of points in a rectangular x/y space which I would like to project onto a sphere. As in, I am trying to write this function:
function pointOnSphere(x2d:Number, y2d:Number):Vector3D
{
    // magic
    return new Vector3D(x3d, y3d, z3d);
}
I have been trying to first plot the points onto a cylinder and then map those points to a sphere as directed by this Wikipedia page. However, those formulas assume a constant z=0, which doesn't really do what I want.
I'm using ActionScript 3 / Flex, but any pseudo-code or pushes in the right direction would be greatly appreciated.
Just to clarify: I'm not trying to apply a texture to a sphere object, but rather to place objects along an imaginary sphere.

There is no one right answer. You can choose different approaches based on how you want to place the objects along the sphere.
Is it OK for the objects to get nearer and nearer to each other as you get closer to the sphere's "poles"? Why wouldn't the normal texture-mapping projection actually work for you?
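For reference, here is a minimal sketch of that texture-mapping-style projection in C# (the AS3 version is analogous), assuming the rectangle is normalized to [0,1] x [0,1] and treating x as longitude and y as latitude. Note that it bunches objects together near the poles, which is exactly the caveat above:

using System;

static class SphereMapping
{
    // Map a point (u, v) in [0,1] x [0,1] onto a sphere of the given radius.
    // u wraps around the equator (longitude), v runs pole to pole (latitude).
    public static (double x, double y, double z) PointOnSphere(double u, double v, double radius)
    {
        double lon = u * 2.0 * Math.PI;          // 0 .. 2*pi around the sphere
        double lat = (v - 0.5) * Math.PI;        // -pi/2 (south pole) .. +pi/2 (north pole)

        double x = radius * Math.Cos(lat) * Math.Cos(lon);
        double y = radius * Math.Sin(lat);
        double z = radius * Math.Cos(lat) * Math.Sin(lon);
        return (x, y, z);
    }
}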

Related

Even distribution of objects on circle

I'm creating a circle dynamically based on two positions; the first is the centre, and the second defines the radius. I'm then determining how many of my objects will fit onto the perimeter of the circle with int fillCount = Mathf.RoundToInt(circumference / sizeOfObject);
So now I've got a circle, I know how many things will fit on it, I know the circumference, I know the radius, and I've cobbled together a function that will find the Vector3 of any given point on the circle.
What I can't figure out is how to evenly distribute each object, i.e., place an object every 'n' degrees. I don't think I need to worry about arc reparameterization, or do I? I think the answer lies in finding the angle of placement by using the inverse cosine/sine, but I'm not sure and my maths isn't that good.
If anyone has any input or advice, I'd greatly appreciate it.
If you know how many objects will sit on the circle, you can divide 360 degrees by the number of objects to get the angular spacing between them. Since a point on a circle is located at x = r*cos(angle), y = r*sin(angle), you can add this Vector2 to the Vector2 center of your circle. (Note that Mathf.Deg2Rad converts the angle to radians, so it belongs inside the cos/sin call, not as a factor on the radius.)
As an example, if you have 3 objects to place around the circle, get your spacing by dividing 360/3 = 120. Each object will be placed 120 degrees from the previous one.
Pick your starting point (we will pick 0 degrees):
Vector2(r*Mathf.Cos(0*Mathf.Deg2Rad), r*Mathf.Sin(0*Mathf.Deg2Rad))
The next object will be at 120 degrees, which will be
Vector2(r*Mathf.Cos(120*Mathf.Deg2Rad), r*Mathf.Sin(120*Mathf.Deg2Rad))
Similarly, the 3rd object will be at 240 degrees:
Vector2(r*Mathf.Cos(240*Mathf.Deg2Rad), r*Mathf.Sin(240*Mathf.Deg2Rad))
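Putting it together, a minimal Unity C# sketch; the prefab, center, radius and fillCount fields are illustrative assumptions, not from the original question:

using UnityEngine;

public class CirclePlacer : MonoBehaviour
{
    public GameObject prefab;    // object to place (assumed field)
    public Vector2 center;
    public float radius = 5f;
    public int fillCount = 8;    // e.g. Mathf.RoundToInt(circumference / sizeOfObject)

    void Start()
    {
        float step = 360f / fillCount;               // angular spacing in degrees
        for (int i = 0; i < fillCount; i++)
        {
            float angle = i * step * Mathf.Deg2Rad;  // convert to radians for Cos/Sin
            Vector2 pos = center + new Vector2(radius * Mathf.Cos(angle),
                                               radius * Mathf.Sin(angle));
            Instantiate(prefab, pos, Quaternion.identity);
        }
    }
}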

2d point to 3d point on a sphere

I haven't been entirely sure what to Google or search for to help solve my problem; really hoping someone here can help a little.
Currently I have a 3D scene: a massive sphere with a texture mapped to it and the camera at the center of the sphere, so it's much like a QTVR viewer.
I'd like a way to click on the polygons within the sphere and update the texture at that position with a dot or similar.
The only part of the process where I need help is converting the 2D mouse position to a point on the inside of the sphere.
Hope this makes sense.
FYI, I'm only looking for a pure math solution.
The first thing you need to do is convert the screen coordinate into a line in 3D space; this line passes through the point you click and your eyepoint.
Once you have this line, you can intersect it with your sphere to find the intersection point on the sphere.
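For the intersection step, here is a minimal C# sketch of the standard ray/sphere quadratic. With the camera at the sphere's center the math collapses to center + radius * normalized(direction), but the general form is shown; the Vec3 helper type is defined inline to keep the sketch self-contained:

using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    public static Vec3 operator *(Vec3 a, double s) => new Vec3(a.X * s, a.Y * s, a.Z * s);
    public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
}

static class RaySphere
{
    // Intersect the ray origin + t*dir (dir need not be normalized) with a
    // sphere at 'center' of radius r. Returns the nearest hit point with t > 0.
    public static Vec3? Intersect(Vec3 origin, Vec3 dir, Vec3 center, double r)
    {
        Vec3 oc = origin - center;
        double a = Vec3.Dot(dir, dir);
        double b = 2.0 * Vec3.Dot(oc, dir);
        double c = Vec3.Dot(oc, oc) - r * r;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0) return null;               // ray misses the sphere

        double sq = Math.Sqrt(disc);
        double t = (-b - sq) / (2.0 * a);        // nearer root first
        if (t <= 0) t = (-b + sq) / (2.0 * a);   // eye inside the sphere: take the far root
        if (t <= 0) return null;
        return origin + dir * t;
    }
}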
Alternatively, you could get the 2D screen coordinates of the polygons (triangles?) that make up the sphere and then find the one that contains the mouse pointer.

Generating a 3D prism from any 2D polygon

I am creating a 2D sprite game in Unity, which is a 3D game development environment.
I have constrained all translation of objects to the XY-plane and rotation to the Z-axis.
My problem is that the meshes used to detect collisions between objects must still be 3D. I need to detect collisions between the player object (a capsule collider) and a sprite (whose collision volume is defined by a polygonal prism).
I am currently writing the level editor, and I need to let the user define the collision area for any given tile. In the image below the user clicks the points P1, P2, P3, P4 in that order.
Obviously the points join up to form a quadrilateral. This is the collision area I want; however, I must then convert it to a 3D mesh. Basically I need to generate an extrusion of the polygon, then assign the vertex winding, triangles, etc. The vertex positions are not a problem to figure out, as they are merely a translation of the polygon down the Z axis.
I am having trouble creating an algorithm for assigning the winding order of the vertices, especially since the mesh must consist only of triangles.
Obviously the structure I have illustrated is not important; the polygon may be any 2D shape and will always need to form a prism.
Does anyone know any methods for this?
Thank you all very much for your time.
A simple algorithm that comes to mind is something like this:
// multiply the face normal by the extrusion amount = move along the normal
extrudedNormal = faceNormal.multiplyScale(sizeOfExtrusion);
for each (vertex in face) {
    vPrime = vertex.clone();        // copy the position of each vertex
    vPrime.addSelf(extrudedNormal); // translate it along the normal by the extrusion amount
}
So the idea is basic:
- clone the face normal and move it in the same direction by the amount you want to extrude by
- clone the face vertices and move them using the moved (extruded) normal position
For a more complete, feature-rich example, refer to the Procedural Modeling Unity samples. They include a nice mesh extrusion sample too (see ExtrudedMeshTrail.js, which uses MeshExtrusion.cs).
Good luck!
To create the extruded walls:
For each vertex a (with coordinates ax, ay) in your polygon:
- call the next vertex 'b' (with coordinates bx, by)
- create the extruded rectangle corresponding to the line from 'a' to 'b':
- The rectangle has vertices (ax,ay,z0), (ax,ay,z1), (bx,by,z0), (bx,by,z1)
- This rectangle can be created from two triangles:
- (ax,ay,z0), (ax,ay,z1), (bx,by,z0) and (ax,ay,z1), (bx,by,z0), (bx,by,z1)
If you want to create a triangle strip instead, it's even simpler. For each vertex a, just add (ax,ay,z0) and (ax,ay,z1). Whichever vertex you processed first will also need to be processed again after looping over all other vertices.
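A minimal Unity C# sketch of the wall construction described above, assuming the polygon's points are supplied in order; winding is not corrected here, so flip the triangle order if the faces point the wrong way:

using System.Collections.Generic;
using UnityEngine;

static class PrismBuilder
{
    // Build the extruded side walls for a closed 2D polygon given in order.
    public static Mesh BuildWalls(IList<Vector2> polygon, float z0, float z1)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        for (int i = 0; i < polygon.Count; i++)
        {
            Vector2 a = polygon[i];
            Vector2 b = polygon[(i + 1) % polygon.Count]; // wrap to close the prism

            int baseIndex = vertices.Count;
            vertices.Add(new Vector3(a.x, a.y, z0)); // 0
            vertices.Add(new Vector3(a.x, a.y, z1)); // 1
            vertices.Add(new Vector3(b.x, b.y, z0)); // 2
            vertices.Add(new Vector3(b.x, b.y, z1)); // 3

            // Two triangles per rectangle, as in the answer above.
            triangles.AddRange(new[] { baseIndex + 0, baseIndex + 1, baseIndex + 2 });
            triangles.AddRange(new[] { baseIndex + 1, baseIndex + 3, baseIndex + 2 });
        }

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}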
To create the end-caps:
This step is probably unnecessary for collision purposes. But, one simple technique is here: http://www.siggraph.org/education/materials/HyperGraph/scanline/outprims/polygon1.htm
Each resulting triangle should be added at depth z0 and z1.

Detect Shapes in an array of points

I have an array of points. I want to know if this array of points represents a circle, a square or a triangle.
Where should I begin? (I use C#.)
Thanks,
Jon
Depending on your problem, a good approach may be to use the Hough transform and its derived algorithms.
It consists of a transformation from the image space to another space where the coordinates represent the object's parameters (angle and initial point for a line; coordinates of the center and radius for a circle).
The algorithm transforms each point of your array of points into points in the other space. Then you search the new space for points where many votes accumulate; from these points, you get the parameters of your object.
Of course, you need to run it once to recognize the lines (so you will know how many lines are in your bitmap and where they are) and again to recognize the circles (it is not exactly the same algorithm).
You may have a look at this lecture (for the Hough circle transform), but you can easily find the algorithm for lines.
EDIT: you can also have a look at these answers:
Shape recognition algorithm(s)
Detecting an object on the image based on geometrical form
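To make the voting idea concrete, here is a toy C# sketch for the circle case, assuming a known candidate radius and points already extracted as coordinates; the original answer works on bitmaps, so treat this as an illustration only:

using System;
using System.Collections.Generic;

static class HoughCircle
{
    // For each point, vote for every candidate center at distance 'radius',
    // sampling the candidate circle every few degrees; the grid cell with
    // the most votes is the best center estimate.
    public static (double cx, double cy, int votes) FindCenter(
        IList<(double x, double y)> points, double radius, double cellSize)
    {
        var accumulator = new Dictionary<(long, long), int>();

        foreach (var (x, y) in points)
        {
            for (int deg = 0; deg < 360; deg += 4)
            {
                double t = deg * Math.PI / 180.0;
                var cell = ((long)Math.Round((x + radius * Math.Cos(t)) / cellSize),
                            (long)Math.Round((y + radius * Math.Sin(t)) / cellSize));
                accumulator.TryGetValue(cell, out int v);
                accumulator[cell] = v + 1;
            }
        }

        (long, long) best = default;
        int bestVotes = 0;
        foreach (var kv in accumulator)
            if (kv.Value > bestVotes) { best = kv.Key; bestVotes = kv.Value; }

        return (best.Item1 * cellSize, best.Item2 * cellSize, bestVotes);
    }
}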
Imagine it is each of these shapes one by one and try to fit each of them to the data. For a square, you could find the four extreme points and try charting out a square that goes through all of them.
Once you have a candidate shape in place, you could measure the distance between each of the points and the part of the shape nearest to it, then square these distances and add them up. The shape with the smallest sum of squares is probably your best bet.
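As an illustration of the sum-of-squares scoring for the circle case; the centroid-based center/radius estimate is an assumption added here, not part of the answer:

using System;
using System.Collections.Generic;
using System.Linq;

static class ShapeScore
{
    // Sum of squared distances from each point to the nearest point on a
    // circle through the centroid, using the mean distance as the radius.
    public static double CircleScore(IList<(double x, double y)> points)
    {
        double cx = points.Average(p => p.x);
        double cy = points.Average(p => p.y);
        double r = points.Average(p => Dist(p.x, p.y, cx, cy)); // crude radius estimate

        double sum = 0;
        foreach (var (x, y) in points)
        {
            double d = Dist(x, y, cx, cy) - r;  // signed gap to the circle
            sum += d * d;
        }
        return sum;  // smaller means a better circle fit
    }

    static double Dist(double x, double y, double cx, double cy)
        => Math.Sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
}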
Use the Hough Transform.
I'm going to take a wild stab and say: if you have 3 points, the shape represents a triangle; 4 points, some kind of quadrilateral; any more than that, a circle.
Perhaps you could provide more information about your problem.

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a Bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When looping is complete, your box is defined by MinX,MinY as the upper left, MaxX,MinY as the upper right, and so on...
Response to your comment:
If you want your box after a projection, what you need is the "transformed" points; then apply the bounding-box loop as stated above.
"Transformed" usually implies 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on whatever plane you projected onto.
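A minimal C# sketch of that loop, assuming the points are already transformed to 2D:

using System.Collections.Generic;

static class Bounds2D
{
    // Scan all points once, tracking the extremes on each axis.
    public static (double minX, double minY, double maxX, double maxY) Compute(
        IList<(double x, double y)> points)
    {
        double minX = double.MaxValue, minY = double.MaxValue;
        double maxX = double.MinValue, maxY = double.MinValue;

        foreach (var (x, y) in points)
        {
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }
        // Corners: (minX,minY) upper left, (maxX,minY) upper right, etc.
        return (minX, minY, maxX, maxY);
    }
}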
A possible algorithm would be:
1. Find the normal, which you can do by taking the cross product of vectors connecting two pairs of different corners.
2. Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. the normal aligned along the Z axis).
3. Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space, this is a considerably simpler problem).
4. Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
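A Unity C# sketch of those four steps; the corner definition used here, a bounding box in the rotated frame, is one choice among several:

using System.Collections.Generic;
using UnityEngine;

static class PolygonCorners
{
    // Rotate the polygon so its normal lies along +Z, take the 2D bounding
    // box there, then rotate the box corners back into the original plane.
    public static Vector3[] BoundingCorners(IList<Vector3> polygon)
    {
        // Step 1: normal from the cross product of two edge vectors.
        Vector3 normal = Vector3.Cross(polygon[1] - polygon[0],
                                       polygon[2] - polygon[0]).normalized;

        // Step 2: rotation taking the normal onto the Z axis, plus its inverse.
        Quaternion toXY = Quaternion.FromToRotation(normal, Vector3.forward);
        Quaternion back = Quaternion.Inverse(toXY);

        // Step 3: 2D bounding box in the rotated frame.
        float minX = float.MaxValue, minY = float.MaxValue;
        float maxX = float.MinValue, maxY = float.MinValue;
        float z = (toXY * polygon[0]).z;   // all points share this z once flattened
        foreach (Vector3 p in polygon)
        {
            Vector3 q = toXY * p;
            minX = Mathf.Min(minX, q.x); maxX = Mathf.Max(maxX, q.x);
            minY = Mathf.Min(minY, q.y); maxY = Mathf.Max(maxY, q.y);
        }

        // Step 4: rotate the four corners back to 3D.
        return new[]
        {
            back * new Vector3(minX, minY, z),
            back * new Vector3(maxX, minY, z),
            back * new Vector3(maxX, maxY, z),
            back * new Vector3(minX, maxY, z),
        };
    }
}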
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
- Get the minimum/maximum value on each three-dimensional axis over the points of your 2D polygon. A single pass with a loop/iterator over the list of points will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
- Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of axis examined or orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
- Then get the distance of each "2D polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably post-capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
If you can assume some constraints regarding the shapes, then you might be able to get away with knowing less information. For example, if your shape is the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass". The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip of the cone, and so on. If there is some order to the list, like points being entered in counter-clockwise order (about the normal), you could identify all the points.
This sounds like a bit of computation, so it might be reasonable to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This gives you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by an ordering of the point list. If you can't assume anything about the shapes (or even if the shapes are symmetrical, for example), then you will need more data. It depends on your constraints.
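As a small illustration of the center-of-mass idea (a sketch only; for a known symmetrical shape, the ranking by distance is what identifies specific corners):

using System.Collections.Generic;
using System.Linq;
using UnityEngine;

static class CenterOfMass
{
    // Rank polygon points by their distance from the centroid; for a known
    // symmetrical shape the ordering identifies specific corners.
    public static List<Vector3> RankByDistanceFromCentroid(IList<Vector3> points)
    {
        Vector3 com = Vector3.zero;
        foreach (Vector3 p in points) com += p;
        com /= points.Count;                      // simple vertex centroid

        return points.OrderByDescending(p => (p - com).sqrMagnitude).ToList();
    }
}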
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located). But this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin) - and this is where the trouble starts.
I would recommend using the Y axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up". But then you are in trouble in case your plane is orthogonal to the Y axis (in that case you might want to use the projected Z axis as "up" instead).
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system. You can find all of it by googling for raytracer algorithms for polygons.
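A short Unity C# sketch of building that in-plane coordinate system, following the answer's suggestion of using the projected Y axis as "up" with a Z-axis fallback:

using UnityEngine;

static class PlaneBasis
{
    // Build an orthonormal 2D basis (right, up) in the polygon's plane,
    // using the projected world Y axis as "up" where possible.
    public static Vector2 ToPlaneCoords(Vector3 point, Vector3 origin, Vector3 normal)
    {
        normal = normal.normalized;

        // Project the world Y axis onto the plane; fall back to Z if degenerate.
        Vector3 up = Vector3.up - Vector3.Dot(Vector3.up, normal) * normal;
        if (up.sqrMagnitude < 1e-6f)
            up = Vector3.forward - Vector3.Dot(Vector3.forward, normal) * normal;
        up.Normalize();

        Vector3 right = Vector3.Cross(up, normal);   // completes the basis

        // Inner products give the 2D coordinates with respect to the basis.
        Vector3 d = point - origin;
        return new Vector2(Vector3.Dot(d, right), Vector3.Dot(d, up));
    }
}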
