Convert 4 corners into a matrix for OpenGL to Direct3D sprite conversion - math

I am working on code for Scrolling Game Development Kit. An old release (2.0) of this program was based on DirectX and used Direct3D Sprite objects to draw all the graphics. It used the Transform property of the sprite object to specify how the texture rectangle would be transformed as it was output to the display. The current release (2.1) was a conversion to OpenGL and uses GL.TexCoord2 and GL.Vertex2 calls to send the coordinates of the source and output rectangles when drawing sprites.

Now someone reports that their video card worked great with DirectX, but their OpenGL drivers do not support the GL_ARB extension necessary to use NPOTS (non-power-of-two-sized) textures, which is pretty basic. So I'm trying to go back to DirectX without reverting everything to 2.0. Unfortunately, it seems much easier to get 4 points given a matrix than to get a matrix given 4 points. I have done away with all the matrix information in version 2.1, so I only have the 4 corner points left when calling the function that draws images on the display. Is there any way to use the 4 corner information to transform a Direct3D Sprite?
Alternatively, does anybody know why DirectX would be able to do something that OpenGL can't? Are some video cards' drivers just so bad that DirectX supports NPOTS textures but OpenGL doesn't?

It's probably worth reading up on how bump mapping is done. See e.g. this site. You end up with a tangent space matrix, which maps from world space to tangent space (the space relative to the current face). Its purpose is to take a vector in world space, generally a vector from a light, and convert it into a vector in tangent space, that being the space in which your texture defines surface normals.
Anyway, if you inverted that matrix you'd have a mapping from tangent space to world space, which I think is what you want. The mapping produced in that tutorial is purely for direction vectors, but expanding it out to a 4x4 and anchoring the origin somewhere meaningful shouldn't be difficult.
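If the four corners always form a parallelogram (which is the case whenever they came from pushing a rectangle through an affine transform), you can also rebuild the matrix directly from three of the corners with no tangent-space detour. Below is a minimal C++ sketch under that assumption; it maps the unit square onto the quad and lays the result out in Direct3D's row-vector convention, so a texel-space source rectangle would need an extra scale first. The names are illustrative, not from the project.

    struct Vec2 { float x, y; };

    // Build a row-major 4x4 matrix (Direct3D row-vector convention) that maps
    // the unit square (0,0)-(1,1) onto the quad p0 -> p1 -> p2 -> p3, where
    // p0 is the image of (0,0), p1 of (1,0) and p3 of (0,1).
    // Assumes the quad is a parallelogram (p2 is implied by the other three),
    // i.e. the original transform was affine; a general quad needs a homography.
    void matrixFromCorners(const Vec2& p0, const Vec2& p1, const Vec2& p3,
                           float m[16]) {
        const float ux = p1.x - p0.x, uy = p1.y - p0.y;   // image of the x axis
        const float vx = p3.x - p0.x, vy = p3.y - p0.y;   // image of the y axis
        const float r[16] = {
            ux,   uy,   0.0f, 0.0f,
            vx,   vy,   0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            p0.x, p0.y, 0.0f, 1.0f,  // translation row: image of the origin
        };
        for (int i = 0; i < 16; ++i) m[i] = r[i];
    }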


Why doesn't Vulkan use the "standard cartesian" coordinate system?

Vulkan uses a coordinate system in which (-1, -1) is the top-left corner instead of the bottom-left corner, as it would be in the standard Cartesian coordinate system one typically learns about in school. So (-1, 1) is the bottom-left corner in Vulkan's coordinate system.
(image from: http://vulkano.rs/guide/vertex-input)
What are the advantages of using Vulkan's coordinate system? One plain advantage I can see is pedagogical: it forces people to realize that coordinate systems are arbitrary, and one can easily map between them. However, I doubt that's the design reason.
So what are the design reasons for this choice?
Many coordinate systems in computer graphics put the origin at the top-left and point the y axis down.
This is because in early televisions and monitors, the electron beam that draws the picture starts at the top-left of the screen and progresses downward.
The pixels on the screen were generally made by reading memory in sequential addresses as the beam moved down the screen, and modulating that electron beam in accordance with each byte read in sequence. So the y axis corresponds to time, which corresponds to memory address.
Even today, virtually all representations of a bitmap in memory, or in a bitmapped file, start at the top-left.
It is natural when drawing bitmaps in such a medium to use a coordinate system that starts at the top-left too.
Things become a little more complicated when you use a bottom-left origin because finding the byte that corresponds to a pixel requires a little more math and needs to account for the height of the bitmap. There is usually just no reason to introduce the extra complexity.
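To make that concrete, here is a small C++ sketch of the two addressing schemes (the names are illustrative); only the bottom-left version needs to know the bitmap height:

    #include <cstddef>

    // Byte offset of pixel (x, y) in a bitmap stored top row first:
    // it only needs the row stride.
    std::size_t offsetTopLeftOrigin(std::size_t x, std::size_t y,
                                    std::size_t stride, std::size_t bytesPerPixel) {
        return y * stride + x * bytesPerPixel;
    }

    // With a bottom-left origin the row index has to be flipped,
    // which drags the bitmap height into the calculation.
    std::size_t offsetBottomLeftOrigin(std::size_t x, std::size_t y, std::size_t height,
                                       std::size_t stride, std::size_t bytesPerPixel) {
        return (height - 1 - y) * stride + x * bytesPerPixel;
    }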
When you start to introduce matrix transformations however, it becomes much more convenient to work with an upward-pointing y axis, because that lets you use all the vector algebra you learned in school without having to reverse the y axis and all the rotations in your thinking.
So what you'll usually find is that when you are working in a system that lets you do matrix operations, translations, rotations, etc., then you will have an upward-pointing y axis. At some point deep inside, however, the calculations will transform the coordinates into a downward-pointing y axis for the low-level operations.
One of the common sources of confusion and bugs in OpenGL was that NDC and window coordinates had y increasing upwards, which is the opposite of the convention used in nearly all window systems and many (but not all) image formats, where y increases downwards. Developers ended up having to insert a y-flip in their transformation pipeline in many cases, and it wasn't always clear when they did and didn't need one.
So Vulkan decided to make it so you could load an image from a y-downwards image format directly into memory and draw it to the screen without any explicit y flips, to avoid this source of errors.
Other coordinate systems were then chosen to be consistent with that, in the sense that the y direction never flips direction in the standard Vulkan transformation pipeline. That meant that clip space vertex coordinates also had y increasing downwards.
This ended up meaning that Vulkan clip coordinates have a different orientation than D3D clip coordinates, which was an annoyance for developers supporting both APIs. So the VK_KHR_maintenance1 extension adds the ability to specify a negative viewport height, which essentially introduces a y-flip to the clip-space to framebuffer coordinate transform. (D3D has essentially always had an implicit y-flip here.)
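For illustration, a flipped viewport using VK_KHR_maintenance1 (core in Vulkan 1.1) might look like the following C++ sketch; the framebuffer dimensions are placeholder parameters:

    #include <vulkan/vulkan.h>

    // Flipped viewport: anchor y at the bottom edge of the render area and use a
    // negative height, so clip-space +y effectively points up again.
    // Requires VK_KHR_maintenance1 (core in Vulkan 1.1).
    VkViewport makeFlippedViewport(float framebufferWidth, float framebufferHeight) {
        VkViewport vp{};
        vp.x        = 0.0f;
        vp.y        = framebufferHeight;   // start at the bottom edge
        vp.width    = framebufferWidth;
        vp.height   = -framebufferHeight;  // negative height performs the y-flip
        vp.minDepth = 0.0f;
        vp.maxDepth = 1.0f;
        return vp;
    }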
This is how I remember the reasoning in the Vulkan Working Group, anyway. I don't think there's an authoritative public source anywhere.

JavaFX 2D shapes in 3D space

I know that if I rotate an object which extends javafx.scene.shape.Shape, I can transform it into 3D space, even though it was primarily designed for 2D (at least as far as I know).
Let's say I have a 3D scene (a perspective camera and depth buffer are used) in which various MeshViews occur. Some are used for areas, others for lines. In both cases those shapes must be triangulated in order to draw them with a TriangleMesh, which is often nontrivial.
Now, when I change the drawing of these lines to use the Polyline class, the performance drop is horrible and there are strange artifacts. I thought I could benefit from the fact that a polyline has fewer vertices and the developer doesn't have to triangulate programmatically.
Is it discouraged to use shapes extending javafx.scene.shape.Shape within 3D space? How are they drawn internally?
If the question is "should I use 2D Shape objects in 3D space in JavaFX?" then the answer is no, because you will get all the terrible performance that you are seeing. However, it sounds like you are trying to make up for JavaFX's lack of a 3D polyline object by using 2D objects and rotating them into 3D space. If that is true, use the free open-source F(X)yz library instead:
http://birdasaur.github.io/FXyz/
For example, the PolyLine3D class allows you to simply specify a list of Point3Ds and it will connect them for you:
/src/org/fxyz/shapes/composites/PolyLine3D.java
and you can see example code on how to use it in the test directory:
/src/org/fxyz/tests/PolyLine3DTest.java

a planet in openGL: vector data or texture mapping?

I am completely new to 3D and started with Jeff LaMarche's tutorials as an introduction to OpenGL ES for iPhone. So far, I am able to draw a spinning sphere, which will be the base of my application.
What I want to do is render planet Earth using 2D GIS vector data (polygons, lines or points with latitude/longitude or x/y coordinates).
I want to be able to turn different layers on/off and maybe be able to identify an object that is touched.
My questions are:
would it be easier to rasterize my vector data and use it as an image texture, or to apply the vector data directly onto the sphere (keeping in mind that I want to turn the layers on/off, with the touch-enabled objects being optional)?
would it be easier to use software like Blender to model the planet and add the layers, rather than starting from the (procedural) sphere I already have?
does the export path from Blender to OpenGL work well?
This kind of question is difficult to answer in general. Technically, your intention sounds a lot like you would like to write a program such as Google Earth or KDE Marble. Since you're referring to GIS data, you will require very high resolution; textures only make sense for limited-resolution data.
GIS applications usually take hybrid approaches, where some vector data are rendered directly (roads, water, borders), while other data are rendered to textures, or more accurately texture tiles, which are used as caches, for example for building outlines in dense cities. However, data as it comes from, say, OSM can be rendered directly as vector data, since it is not very dense.
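If you go the vector route, the core step is just mapping each longitude/latitude vertex onto your sphere before feeding it to OpenGL. A minimal C++ sketch (the radius and axis conventions are assumptions):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Map geographic coordinates (degrees) onto a sphere of radius r,
    // with +z through the north pole and +x through lon = 0 on the equator.
    Vec3 lonLatToSphere(float lonDeg, float latDeg, float r) {
        const float d2r = 3.14159265358979f / 180.0f;
        const float lon = lonDeg * d2r;
        const float lat = latDeg * d2r;
        return { r * std::cos(lat) * std::cos(lon),
                 r * std::cos(lat) * std::sin(lon),
                 r * std::sin(lat) };
    }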

perspective correction of texture coordinates in 3d

I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates and that doesn't seem to be correct. I am using all the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is depth coordinate of vertex, c is texture coordinate of vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
You can see in this image the error in the texture coordinates, evident along the diagonal; this is the result after doing the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
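In code, the interpolation described above amounts to something like the following sketch (this is just an illustration of the formulas, not my actual renderer code; the names are made up):

    struct Vertex {
        float w;     // clip-space w (the depth term) of the vertex
        float u, v;  // texture coordinates
    };

    // Perspective-correct interpolation of texture coordinates at one pixel,
    // given barycentric weights b0, b1, b2 for vertices v0, v1, v2.
    void interpolateUV(const Vertex& v0, const Vertex& v1, const Vertex& v2,
                       float b0, float b1, float b2,
                       float& u, float& v) {
        const float invW   = b0 / v0.w + b1 / v1.w + b2 / v2.w;                      // 1/w
        const float uOverW = b0 * v0.u / v0.w + b1 * v1.u / v1.w + b2 * v2.u / v2.w; // u/w
        const float vOverW = b0 * v0.v / v0.w + b1 * v1.v / v1.w + b2 * v2.v / v2.w; // v/w
        u = uOverW / invW;
        v = vOverW / invW;
    }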
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in the 0.0f-1.0f range) by the Z component (the world-space depth of the XYZ position vector) of each corner of the plane and it'll "throw off" OpenGL's perspective correction.
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
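In fixed-function OpenGL, the usual way to express that trick is through 4-component texture coordinates: scale s and t by a per-corner factor and pass that same factor as q, so the rasterizer divides it back out per pixel. A C++ sketch under that assumption (the per-corner q values are something you derive from your geometry, as discussed in the linked question):

    #include <GL/gl.h>

    // Draw a quad with projective texture coordinates: scaling (s, t) by a
    // per-corner factor q and passing that same q as the fourth component makes
    // the rasterizer interpolate s*q, t*q and q separately and divide per pixel,
    // which removes the diagonal seam on a trapezoid.
    void drawQuadProjective(const float xy[4][2], const float st[4][2], const float q[4]) {
        glBegin(GL_QUADS);
        for (int i = 0; i < 4; ++i) {
            glTexCoord4f(st[i][0] * q[i], st[i][1] * q[i], 0.0f, q[i]);
            glVertex2f(xy[i][0], xy[i][1]);
        }
        glEnd();
    }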
Hope this helps!
ct
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come in the form of a homographic transform.
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render it? This really looks like you went to the trouble of calculating the perspective-correct value, and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (like for example the PS1) the standard solution is just subdividing in smaller polygons to make the defect less noticeable.

3D Software Renderer with VB6

I am an IT student and I have to make a project in VB6. I was thinking of making a 3D software renderer, but I don't really know where to start. I found a few tutorials, but I want something that goes in depth on the maths and algorithms. I would like something that shows how to do 3D transformations, cameras, lights, shading...
The programming language used does not matter; I just need some resources that show me exactly how to do this.
So I just want to know where to find some resources, or you can show me some source code and tell me where to start.
Or if any of you have a better idea for a VB6 project.
Thanks.
I disagree with the previous posts: a 3D renderer is actually pretty simple. A high-quality 3D renderer is hard, however.
Get a bunch of 3D data, triangles are simplest.
Learn about homogeneous coordinates and the great 4x4 matrix for transforms.
Define a camera by a position and a rotation (expressed in the 4x4 matrix).
Transform your 3D geometry by this camera.
Perform the perspective divide and scale to your window. This converts your 3D data to 2D.
Render the data as 2D.
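As an illustration of the perspective-divide-and-scale step, here is a small sketch in C++ (the math ports directly to VB6; the names are made up):

    struct Vec4 { float x, y, z, w; };

    // Perspective divide and viewport scale: clip space -> pixel coordinates.
    // 'clip' is a point already multiplied by the camera and projection matrices.
    void toScreen(const Vec4& clip, int viewW, int viewH, float& sx, float& sy) {
        const float invW = 1.0f / clip.w;            // perspective divide
        const float ndcX = clip.x * invW;            // now in [-1, 1]
        const float ndcY = clip.y * invW;
        sx = (ndcX * 0.5f + 0.5f) * viewW;           // scale to window width
        sy = (1.0f - (ndcY * 0.5f + 0.5f)) * viewH;  // flip y for a top-left origin
    }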
Now you're going to lose out on a depth buffer, so stick to wireframes in the beginning. :-)
Don't listen to these nay-sayers, go out and have some fun!
Many years ago I made a shaded triangle renderer that used library calls to draw the triangles. It's a rather naive approach, but you would be able to achieve the same result using VB6. I got all the maths and techniques from "Computer Graphics: Principles and Practice" by Foley et al. Some parts are out of date now, but I think you'd find it very helpful for this project, and it can be bought second-hand at reasonable prices, from Amazon for example.
One simple approach could be:
Read model file as triangles
Transform each triangle using matrices to account for camera position
Project triangle points onto 2D
Draw 2D triangle (probably using GDI)
This covers wireframe viewing. To extend this to hidden surface removal you need to work out which triangles are in front. Two possible ways:
Z-order sorting the triangles and drawing the ones furthest from the camera first. This is simple, but inefficient if there are a lot of triangles, and it can give overlapping-triangle artifacts when the order is not quite correct. You also have to decide how to sort the triangles - e.g. by centroid, by extents...
Using a software depth buffer. This will give better results but is more work to implement. You will have to write your own triangle drawing code, so you cannot rely on GDI. See Bresenham's line algorithm and the related algorithms for filled triangles for how to do this.
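The sorting option is only a few lines. A C++ sketch ordering triangles by centroid depth in camera space (the types and the depth convention are assumptions; the same logic translates directly to VB6):

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v[3]; };

    // Painter's algorithm ordering: draw back-to-front, here by centroid depth in
    // camera space, assuming the camera looks down +z (flip the comparison otherwise).
    void sortBackToFront(std::vector<Triangle>& tris) {
        auto depth = [](const Triangle& t) {
            return (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f;
        };
        std::sort(tris.begin(), tris.end(),
                  [&](const Triangle& a, const Triangle& b) {
                      return depth(a) > depth(b);   // farthest first
                  });
    }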
After this you'd also need some kind of shading based on lighting. The calculations are covered in Computer Graphics: Principles and Practice. For simple shading you can stick with drawing triangles using GDI, but if you want to do Gouraud or Phong shading the colour values vary across a triangle. One way around this is to subdivide the triangle into smaller triangles, but this is inefficient and won't give very nice-looking results. Better would be to draw the triangles yourself, as required above for the software depth buffer.
A good extension would be to support primitives other than triangles. Basic approach would be to split primitives into triangles as you read them.
Good luck - could be an interesting project.
VB6 is not the best-suited language for maths and 3D graphics, and given that you have no previous knowledge of the subject either, I would recommend you choose something different (and easier).
As it's Visual Basic, you could try something more form-oriented; that is the original intent of the language.
There is the 3D engine list, which lists three engines in pure BASIC (an oxymoron) with source code, and one of them is in Visual Basic (Dex3D).
DeX3D is an open source 3D engine coded entirely in Visual Basic by Jerry Chen ( -onlyuser#hotmail.com ).
Gouraud shading
Transparency
Fogging
Omni and spot lights
Hierarchical meshes
Support for 3D Studio files
Particle systems
Bezier curve segments
2.5 D text
Visual Basic source
More information, screenshots and the source can be found on the Dex3D homepage. (dead link)
EGL25 by Erkan Sanli is a fast open source VB 6 renderer that can render, rotate, animate, etc. complex solid shapes made of thousands of polygons. Just Windows API calls – no DirectX, no OpenGL.
VBMigration.com chose EGL25 as a high-quality open-source VB6 project to demonstrate their VB6 to VB.Net upgrade tool.
A 3D software renderer as a whole project is fairly complex if you've never done it before. I would suggest something smaller - like just doing the 3D portion and using lines to do the rendering, or just writing a shaded triangle renderer (which is the underpinning of 3D renderers anyway).
Something a little simpler rather than trying to write a full-blown 3D software renderer on the first go - especially in VB.
A software renderer is a very difficult project and VB6 is not at all suited to the task (for something like this, C++ is the way to go). Anyway, I can suggest some great books I used:
Shaders: http://wiki.gamedev.net/index.php/D3DBook:Introduction_%28Volume%29
Math: 3D Math Primer for Graphics and Game Development
There are two other books. Even though they are for VB.NET, you can find some useful code in them:
.NET Game Programming with DirectX 9.0
Beginning .NET Game Programming in VB .NET
I think you can take one of two ways. Either go the DirectX route and use DirectX 8, which has VB 5/6 support. I found a page: http://www.gamedev.net/reference/articles/article1308.asp
Or you can always write an engine from the ground up, but by doing so you will need some basic linear algebra, like Frank Krueger suggests.
