Project a grid in screenspace on the world xz plane - math

I want to project a grid on the xz-plane like shown here:
To do that, I created a vertex grid with x and z in the range [-1, 1]. In the shader I multiply the xz screen coordinate of each vertex by the inverse of the view-projection matrix. Then I adjust the height depending on the new world xz coordinates, and finally I transform these coordinates back to screen space by multiplying them by the view-projection matrix.
I don't know why, but I get a very strange plane on the screen. Are the mathematical operations I use correct?
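For concreteness, here is a minimal CPU-side sketch of the intended round trip (using GLM; heightAt is a hypothetical stand-in for whatever supplies the terrain height). The divide by w after multiplying by the inverse matrix is the step that is easiest to miss:

```cpp
#include <glm/glm.hpp>

// Hypothetical height lookup; stands in for the real height source.
float heightAt(float x, float z) { return 0.0f; }

glm::vec4 projectGridVertex(glm::vec2 gridXZ, const glm::mat4& viewProj)
{
    // Treat the grid vertex as a clip-space position on the near plane.
    glm::vec4 clip(gridXZ.x, gridXZ.y, 0.0f, 1.0f);

    // Unproject to world space; the divide by w is essential.
    glm::vec4 world = glm::inverse(viewProj) * clip;
    world /= world.w;

    // Adjust the height from the recovered world x/z...
    world.y = heightAt(world.x, world.z);

    // ...and transform back to clip space for rasterization.
    return viewProj * world;
}
```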

The grid that you initially create, is that in projection space or actual screen co-ords? It sounds like it is in projection space since you only transform it with the inverse of the view-projection matrix to get into world co-ords. I think you need to include the "Window" matrix too i.e. transform them by the inverse of the View-Projection-Window matrix (and similarly on the way back to screen co-ords).
Edit:
I'm probably not understanding exactly what it is you're trying to do so here's some questions back. :)
Are you trying to take the grid that's shown in the screenshot in your question and project that onto world z-x co-ordinates? If so, then why do you start with a grid of z-x values? Also, if you apply an inverse view matrix to those, then surely you would end up with a line, since the camera looks along z, although your second screenshot shows that you are getting a plane. I'm a bit confused.

Related

How to calculate a random point inside a cube

I'm trying to figure out the math to find a random point inside a cube.
I have something small working, but it can't take into account the rotation of the cube.
Here are some images of my results.
Here you can see the cube is rotated to some degree, but when I generate some points they retain the shape as if the cube were unrotated (I think the term is axis-aligned, but I'm not sure).
I'm using a Vector to represent the extents of the cube, but for the life of me I can't figure out how to get the points to follow it when it's rotated.
Can someone point me in the right direction as to how I would do this?
EDIT1:
Now it's misaligned, and it gets even weirder when I rotate it sideways.
Can someone walk me through it from the beginning? I think my base line math is all wrong to begin with.
Generate the points with the cube in its straight (axis-aligned) position, then apply the rotation; also check the origin of the coordinates, since the rotation has to happen about the cube's center rather than the world origin. See the sketch below.
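A minimal sketch of that, assuming the cube is described by a center, half-extents, and an orientation quaternion (the names here are illustrative):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <random>

// Hypothetical cube description: center, half-extents, orientation.
struct Cube {
    glm::vec3 center;
    glm::vec3 extents;   // half-size along each local axis
    glm::quat rotation;  // orientation of the cube
};

glm::vec3 randomPointInCube(const Cube& cube, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);

    // 1. Sample in the cube's local, axis-aligned ("straight") frame.
    glm::vec3 local(dist(rng) * cube.extents.x,
                    dist(rng) * cube.extents.y,
                    dist(rng) * cube.extents.z);

    // 2. Rotate into world space, then translate to the cube's center.
    return cube.center + cube.rotation * local;
}
```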

Find point in 3D plane

I have four points in a 3D space, example:
(0,0,1)
(1,0,1)
(1,0,2)
(0,0,2)
Then I have a 2D position on that square plane:
x = 0.5
y = 0.5
I need to find out the 3D space point of that position in the plane. In this example it's easy: (0.5,0,1.5), because Y is zero. But imagine that Y was not zero (and not all the same), that the plane is leaning in some direction. How would I calculate the point in that case?
I imagine this should be a pretty easy thing to solve, but I can't figure it out. Please answer in programming terms and not in straight math terms, if possible.
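In programming terms, if the 2D position is read as (u, v) fractions across the quad, bilinear interpolation gives the 3D point directly. A sketch using GLM, with the corners A, B, C, D in the order of the example points above:

```cpp
#include <glm/glm.hpp>

// Bilinear interpolation across the quad A-B-C-D for u, v in [0, 1].
// With the example corners and u = v = 0.5 this returns (0.5, 0, 1.5),
// and it keeps working when the Y values differ (a sloping plane).
glm::vec3 pointOnQuad(glm::vec3 A, glm::vec3 B, glm::vec3 C, glm::vec3 D,
                      float u, float v)
{
    glm::vec3 top    = glm::mix(A, B, u); // along edge A -> B
    glm::vec3 bottom = glm::mix(D, C, u); // along edge D -> C
    return glm::mix(top, bottom, v);      // blend between the two edges
}
```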
Update with image: The gray plane (made out of two triangles) is the real, actually existing one. I create a non-existing plane on top of it; the ABCD corners are exactly the same, but it doesn't slope. What I need to do is project a pixel (the blue one in the example) from the non-existing plane onto the existing plane. It will be in the exact same location, except that it gains a Y value from the sloping plane.
What I've been able to work out so far on my own is which one of the two triangles to use in the gray plane and the normal of triangle. I basically just need to figure out how I can project the pixel.
Figured it out mostly thanks to http://gamedeveloperjourney.blogspot.com/2009/04/point-plane-collision-detection.html
Made me realize I had to check the normal a bit more closely; it turns out my plane's grid was being rendered slightly differently than the actual coordinates of the vertices. No wonder this was so hard to get right! The pixel was projected correctly but rendered incorrectly.
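For reference, a sketch of the projection step that link describes: solve the plane equation for Y (this assumes the plane is not vertical, i.e. the normal's Y component is non-zero):

```cpp
#include <glm/glm.hpp>

// Plane through p0 with normal n: n . (p - p0) = 0. Solving for p.y
// gives the height under a point with known X/Z.
float heightOnPlane(const glm::vec3& p0, const glm::vec3& n, float x, float z)
{
    // n.x*(x - p0.x) + n.y*(y - p0.y) + n.z*(z - p0.z) = 0
    return p0.y - (n.x * (x - p0.x) + n.z * (z - p0.z)) / n.y;
}
```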

Getting scan lines of arbitrary 2d triangle

How would one go about retrieving scan lines for all the lines in a 2D triangle?
I'm attempting to implement the most basic feature of a 2D software renderer, that of texture mapping triangles. I've done this more times than I can count using OpenGL, but I find myself limping when trying to do it myself.
I see a number of articles saying that in order to fill a triangle (whose three vertices each have texture coordinates clamped to [0, 1]), I need to linearly interpolate between the three points. What? I thought interpolation was between two n-dimensional values.
NOTE: This is not for 3D; it's strictly 2D, and all the triangles are arbitrary (not axis-aligned in any way). I just need to fill the screen with their textures the way OpenGL would. I cannot use OpenGL as a solution.
An excellent answer and description can be found here: http://sol.gfxile.net/tri/index.html
You can use the Bresenham algorithm to draw/find the sides.
One way to handle it is to interpolate in two steps if you use a scanline algorithm. First you interpolate the values along the edges of the triangle, and then, as you draw each scanline, you interpolate between the start and end values of that scanline.
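A sketch of that two-step interpolation (putTexel is a hypothetical callback into your framebuffer code; each vertex carries a screen position and a UV pair):

```cpp
#include <algorithm>
#include <cmath>

struct Vert { float x, y, u, v; };

// Linear interpolation of position and UV together.
static Vert lerpV(const Vert& a, const Vert& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t,
             a.u + (b.u - a.u) * t, a.v + (b.v - a.v) * t };
}

void fillTriangle(Vert v0, Vert v1, Vert v2,
                  void (*putTexel)(int x, int y, float u, float v))
{
    // Sort by y so v0 is topmost and v2 bottommost.
    if (v1.y < v0.y) std::swap(v0, v1);
    if (v2.y < v0.y) std::swap(v0, v2);
    if (v2.y < v1.y) std::swap(v1, v2);
    if (v2.y <= v0.y) return; // degenerate: zero height

    for (int y = (int)std::ceil(v0.y); y < (int)std::ceil(v2.y); ++y) {
        // Step 1: interpolate along the triangle edges to get the span.
        Vert a = lerpV(v0, v2, (y - v0.y) / (v2.y - v0.y)); // long edge
        Vert b = ((float)y < v1.y)
            ? lerpV(v0, v1, (y - v0.y) / (v1.y - v0.y))     // upper short edge
            : lerpV(v1, v2, (y - v1.y) / (v2.y - v1.y));    // lower short edge
        if (a.x > b.x) std::swap(a, b);

        // Step 2: interpolate between the start and end of the scanline.
        for (int x = (int)std::ceil(a.x); x < (int)std::ceil(b.x); ++x) {
            float t = (x - a.x) / (b.x - a.x);
            putTexel(x, y, a.u + (b.u - a.u) * t, a.v + (b.v - a.v) * t);
        }
    }
}
```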
Since you are working in 2D you can also use a matrix transformation to map screen coordinates to texture coordinates. Yesterday I answered a similar question here. The technique is called change of basis in mathematics.
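The change-of-basis idea, sketched with GLM: express the pixel in the basis formed by two triangle edges, then reuse those coefficients on the corresponding UV edges (assumes a non-degenerate triangle):

```cpp
#include <glm/glm.hpp>

// Map a screen-space pixel p to texture space for the triangle
// (p0, p1, p2) with texture coordinates (uv0, uv1, uv2).
glm::vec2 screenToUV(glm::vec2 p,
                     glm::vec2 p0, glm::vec2 p1, glm::vec2 p2,
                     glm::vec2 uv0, glm::vec2 uv1, glm::vec2 uv2)
{
    // Solve p - p0 = s*(p1 - p0) + t*(p2 - p0) for (s, t).
    glm::mat2 basis(p1 - p0, p2 - p0);  // columns are the edge vectors
    glm::vec2 st = glm::inverse(basis) * (p - p0);

    return uv0 + st.x * (uv1 - uv0) + st.y * (uv2 - uv0);
}
```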

perspective correction of texture coordinates in 3d

I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates and that doesn't seem to be correct. I am using all the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is the clip-space w coordinate of a vertex, c is the texture coordinate of a vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
See in this image the error in the texture coordinates evident along the diagonal, this is the result having done the division by depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
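For reference, the three equations above translate almost directly to code (a sketch; b are the barycentric weights, w the clip-space w values, and c the attribute being interpolated):

```cpp
// Perspective-correct interpolation of one attribute at a pixel.
float perspectiveCorrect(const float b[3], const float w[3], const float c[3])
{
    float oneOverW = b[0] / w[0] + b[1] / w[1] + b[2] / w[2];   // 1/w
    float cOverW   = b[0] * c[0] / w[0] + b[1] * c[1] / w[1]
                   + b[2] * c[2] / w[2];                        // c/w
    return cOverW / oneOverW;                                   // c
}
```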
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in [0.0, 1.0]) by the Z component (world-space depth of an XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
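A sketch of one way to feed that multiply-by-depth idea through legacy OpenGL, using projective texture coordinates so the hardware divides by q per pixel (the per-corner q value and the function name are assumptions here):

```cpp
#include <GL/gl.h>

// Emit one corner of the quad with projective texture coordinates.
// Sampling resolves to (u*q, v*q)/q == (u, v), but the division is
// performed per pixel, which removes the affine distortion.
void quadCorner(float u, float v, float q, float x, float y, float z)
{
    glTexCoord4f(u * q, v * q, 0.0f, q);
    glVertex3f(x, y, z);
}
```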
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates back from screen coordinates.
It will come in the form of an homographic transform.
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right, though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render. This really looks like you went to the trouble of calculating the perspective-correct value and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST)
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (like for example the PS1) the standard solution is just subdividing in smaller polygons to make the defect less noticeable.

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a Bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When the loop is complete, your box is defined by MinX, MinY as the upper left corner, MaxX, MinY as the upper right, and so on...
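The loop itself is about as short as it sounds (a sketch):

```cpp
#include <cfloat>
#include <vector>

struct Point2 { float x, y; };
struct Box2   { float minX, minY, maxX, maxY; };

// One pass over the points, widening the box as we go.
Box2 boundingBox(const std::vector<Point2>& pts)
{
    Box2 b { FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (const Point2& p : pts) {
        if (p.x < b.minX) b.minX = p.x;
        if (p.x > b.maxX) b.maxX = p.x;
        if (p.y < b.minY) b.minY = p.y;
        if (p.y > b.maxY) b.maxY = p.y;
    }
    return b;
}
```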
Response to your comment:
If you want your box after a projection, what you need is to get the "transformed" points. Then apply bounding box loop as stated above.
Transformed usually implies 2D screen coordinates after a projection(scene render) but it could also mean the 2D points on any plane that you projected on to.
A possible algorithm (sketched in code after the list) would be:
Find the normal, which you can do by using the cross product of vectors connecting two pairs of different corners
Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis)
Calculate the coordinates of the bounding box or whatever other definition of corners you are using (as the polygon is now aligned in 2D space this is a considerably simpler problem)
Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
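A sketch of those four steps with GLM (assumes the polygon is planar and has at least three non-collinear vertices):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <vector>

std::vector<glm::vec3> boxCorners3D(const std::vector<glm::vec3>& poly)
{
    // Step 1: normal from the cross product of two edge vectors.
    glm::vec3 n = glm::normalize(
        glm::cross(poly[1] - poly[0], poly[2] - poly[0]));

    // Step 2: rotation that aligns the normal with the Z axis.
    glm::vec3 axis = glm::cross(n, glm::vec3(0, 0, 1));
    float s = glm::length(axis);
    float angle = std::atan2(s, n.z); // n.z == dot(n, +Z)
    glm::mat4 toXY = (s > 1e-6f)
        ? glm::rotate(glm::mat4(1.0f), angle, axis / s)
        : glm::mat4(1.0f); // already (anti)parallel to Z

    // Step 3: 2D bounding box in the rotated frame.
    float minX = FLT_MAX, minY = FLT_MAX, maxX = -FLT_MAX, maxY = -FLT_MAX;
    float zPlane = 0.0f;
    for (const glm::vec3& p : poly) {
        glm::vec3 q = glm::vec3(toXY * glm::vec4(p, 1.0f));
        minX = std::min(minX, q.x); maxX = std::max(maxX, q.x);
        minY = std::min(minY, q.y); maxY = std::max(maxY, q.y);
        zPlane = q.z; // all points share (nearly) the same z
    }

    // Step 4: rotate the corners back with the inverse (transpose) rotation.
    glm::mat4 back = glm::transpose(toXY);
    std::vector<glm::vec3> corners = {
        {minX, minY, zPlane}, {maxX, minY, zPlane},
        {maxX, maxY, zPlane}, {minX, maxY, zPlane}
    };
    for (glm::vec3& c : corners)
        c = glm::vec3(back * glm::vec4(c, 1.0f));
    return corners;
}
```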
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
Get the minimum/maximum value on each three-dimensional axis across every point of your 2D polygon. A single pass with a loop/iterator over the list of points will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of axis examined or orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
Then get the distance of each "2d polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably post-capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
If you can assume that there is some constraints regarding the shapes, then you might be able to get away with knowing less information. For example, if your shape was the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass." The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip of the cone, etc... If there was some order to the list, like points are entered in counter clockwise order (about the normal), you could identify all the points. This sounds like a bit of computation, so it might be reasonable to try to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This will give you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by an ordering of the point list. If you can't assume anything about the shapes (or even if the shapes were symmetrical, for example), then you will need more data. It depends on your constraints.
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon lies) - but this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and with the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin) - and this is where the trouble starts.
I would recommend using the Y axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up" - but then you are in trouble if your plane is orthogonal to the Y axis (in that case you might want to use the projected Z axis as "up" instead).
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system - you can find all of it by googling for raytracer algorithms for polygons.
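A small sketch of that projection with GLM: remove the normal component of the world Y axis to get "up" on the plane, falling back to the Z axis when the plane is (nearly) orthogonal to Y:

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Build an orthogonal 2D basis on the plane with normal n.
void planeBasis(glm::vec3 n, glm::vec3& up, glm::vec3& right)
{
    n = glm::normalize(n);
    glm::vec3 ref(0, 1, 0);                 // world Y as preferred "up"
    if (std::abs(glm::dot(ref, n)) > 0.99f) // plane ~orthogonal to Y
        ref = glm::vec3(0, 0, 1);           // fall back to world Z

    // Inner-product projection: drop the component of ref along n.
    up = glm::normalize(ref - glm::dot(ref, n) * n);
    right = glm::cross(n, up);
}
```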
