A planet in OpenGL: vector data or texture mapping?

I am completely new to 3D and started with Jeff Lamarche's tutorials as an introduction to OpenGL ES for iPhone. So far, I am able to draw a spinning sphere, which will be the base of my application.
What I want to do is render planet Earth from 2D GIS vector data (polygons, lines, or points with latitude/longitude or x/y coordinates).
I want to be able to turn different layers on/off and maybe be able to identify an object that is touched.
My questions are:
would it be easier to rasterize my vector data to use as an image texture, or to apply the vector data directly onto the sphere (keeping in mind that I want to turn the layers on/off, the touch-enabled objects being optional)?
would it be easier to use a program like Blender to draw the planet and add the layers, rather than starting with the sphere I already have (a procedural sphere)?
do the export tools from Blender to OpenGL work well?

This kind of question is difficult to answer in general. Technically, what you describe sounds a lot like writing a program such as Google Earth or KDE Marble. Since you're referring to GIS data, you will require very high resolution, and textures only make sense for limited-resolution data.
GIS applications usually take a hybrid approach: some vector data are rendered directly (roads, waterways, borders), while others are rendered to textures, or more accurately texture tiles, which are then used as caches, for example for building outlines in dense cities or the like. However, data as it comes from, say, OSM can be rendered directly as vector data, since it is not very dense.
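If you do go the route of placing vector features directly on a procedural sphere, the core operation is converting each latitude/longitude pair into a point on (or slightly above) the sphere. A minimal sketch in plain C++ (the radius, axis convention and function name are assumptions for illustration, not part of the original answer):

#include <cmath>

struct Vec3 { float x, y, z; };

// Convert geographic coordinates (degrees) to a point on a sphere of the
// given radius. Y is treated as "up" through the poles here; adjust to
// whatever axis convention your renderer uses.
Vec3 latLonToSphere(float latDeg, float lonDeg, float radius)
{
    const float d2r = 3.14159265358979f / 180.0f;
    float lat = latDeg * d2r;
    float lon = lonDeg * d2r;
    return { radius * std::cos(lat) * std::cos(lon),   // x
             radius * std::sin(lat),                    // y (up)
             radius * std::cos(lat) * std::sin(lon) };  // z
}

Drawing a layer then means converting each feature's coordinates with this function and rendering the resulting line strips or polygons with a small radius offset, so they sit just above the globe and don't z-fight with it.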

Related

How to create 2D slices of 3D object model in Qt?

I'm currently rendering a 3D model (Wavefront .obj format) in my Qt program. Right now, I'm rendering the model using Scene3D in QML, and I'm able to get it to display in the viewing area. What I would like to do is have a user click on the model and generate a 2D cross-section of that slice, which I would then plot in a different window. I'm quite new to 3D rendering, and a lot of the Qt documentation isn't very descriptive. I've been reading the Qt documentation, experimenting, and searching online with no luck. How can I create 2D slices of a 3D object model in Qt 3D, preferably in QML? What Qt libraries or classes can I use to achieve this?
Unfortunately, the fact that models are stored as a set of surfaces makes this hard. Qt probably doesn't have a built-in method for this.
Consider, for example, that a model made of faces might be missing a face. What then? Can you interpolate across that gap consistently from different angles? And what about the fact that the cross-section plane probably won't pass through any of the model's vertices?
But, of course, it can be solved. First, just don't allow unclosed surfaces (meshes with holes). Second, to find the vertices of your cross-section, intersect every edge in your model with the cutting plane; each intersection gives you a point. Third, to find the edges, look at that list of points: any two that come from edges of the same polygon in the mesh should be connected by an edge in the cross-section. To decide which direction the edge should go, project the normal of that polygon onto the plane you're using. For filling, I don't really know what to do. I guess that's whatever you want it to be.
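A minimal sketch of the edge/plane intersection step described above, in plain C++ (the vector and plane types here are illustrative, not Qt classes):

#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Plane given by a unit normal n and offset d: all points p with dot(n, p) + d == 0.
struct Plane { Vec3 n; double d; };

// Returns the intersection point of edge (a, b) with the plane, if the edge crosses it.
std::optional<Vec3> intersectEdge(const Vec3& a, const Vec3& b, const Plane& plane)
{
    double da = dot(plane.n, a) + plane.d;    // signed distance of a from the plane
    double db = dot(plane.n, b) + plane.d;    // signed distance of b from the plane
    if (da * db > 0.0) return std::nullopt;   // both endpoints strictly on the same side
    if (da == db) return a;                   // edge lies in the plane; keep one endpoint
    double t = da / (da - db);                // interpolation factor along the edge
    return Vec3{ a.x + t * (b.x - a.x),
                 a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z) };
}

Running this over every edge of every face gives the cross-section points; grouping the points by the face they came from gives the cross-section edges, as described above.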

Using Geo-coordinates Instead of Cartesian to Draw in Argon and A-Frame

I would like to create a GPS drawing program in Argon and A-Frame which draws lines based upon people's movements.
Lines can be drawn in A-Frame with, for example, the meshline component, which uses Cartesian points:
<a-entity meshline="lineWidth: 20; path: -2 -1 0, 0 -2 0"></a-entity>
If I were to do this with a GPS device, I would take the GPS coordinates and map them directly to something like Google maps. Does Argon have any similar functionality such that I can use the GPS coordinates directly as the path like so:
<a-entity meshline="lineWidth: 20; path: 37.32299 -122.04185 0, 37.32298 -122.03224"></a-entity>
Since one can specify an LLA point for a reference frame, I suppose one way to do this would be to treat the center LLA point as "0, 0, 0" and then use a function to map the LLA domain to a Cartesian range.
It would be preferable, however, to use the geo-coordinates directly. Is this possible in Argon?
To understand the answer, you need to first understand the various frames of reference used by Argon.
First, Argon makes use of cesiumjs.org's geospatial math libraries and Entities, so that all "locations" in Argon must either be expressed geospatially OR be relative to a geospatial entity. These are rooted at the center of the earth, in what Cesium calls FIXED coordinates, also known as ECEF or ECF coordinates. In that system, coordinates are in meters, with up/down going through the poles and east/west going through the meridian (I believe). Any point on the surface of the earth is represented with pretty large numbers.
This coordinate system is nice because we can represent anything on or near the earth precisely using it. Cesium also supports INERTIAL coordinates, which are used to represent near-earth orbital objects, and can convert between the two frames.
But, it is inconvenient when doing AR for a few reasons:
The numbers used to represent the positions of the viewer and of objects near them are quite large, even if the objects are very close to the viewer, which can lead to mathematical accuracy issues, especially in the 3D graphics system.
The coordinates we "think about" when we think about the world around us have the ground as "flat" and "up" pointing ... well, up. So, in 3D graphics, an object above another object typically has the same X and Z values but a bigger Y. In ECEF coordinates, all the numbers change, because what we perceive as "up" is really a vector from the center of the earth through us, and only points truly "up" at the north (or south, depending on your +/-) pole. Most 3D graphics libraries you might want to use (physics libraries, for example) assume a world in which the ground is one plane (typically the XZ plane) and Y is up (some aeronautics and other engineering applications use Z as up and have XY as the ground, but the issue is the same).
Argon deals with this, as do many geospatial AR systems, by creating a local coordinate system for the graphics and application to use. There are really three options for this:
Pick some arbitrary (but fixed) local place as the origin. Some systems, which are built to work in one place, have this hard-coded. Others let the application set it. We don't do this because it would encourage applications to take the easy path and only work in one place (we've seen this in the past).
Set the local place to the camera. This has the advantage that the math is the most "accurate" because all points are expressed relative to the camera. But, this causes two issues. First, the camera tends to move continuously (even if only due to sensor noise) in AR apps. Second, many libraries (again, like physics libraries) assume that the origin of the system is stable and on the earth, with the camera/user moving through it. These issues can be worked around, but they are tedious for application developers to deal with.
Set the origin of the local coordinates to an arbitrary location near the user, and if the user moves far from it, recenter automatically. The advantage of this is that the program doesn't necessarily have to do much to deal with it, and it meshes nicely with 3D graphics libraries. The disadvantage is that the local coordinates are arbitrary and might be different each time a program is run. In addition, the application developer may have to pay attention to when the origin is recentered.
Argon uses option 3. When the app starts, we create a new local coordinate frame at the user's location, on the plane tangent to the earth. If the user moves far from that location, we update the origin and emit an event to the application (currently, we recenter if you are 5 km away from the origin). In many simple apps, with only a few frames of reference expressed in geospatial coordinates (and the rest of the application data expressed relative to known geospatial locations), the conversion from geospatial to local can just be done each frame, allowing the app developer to ignore the recentering problem. The programmer is free to use either ENU (east-north-up) or EUS (east-up-south) as their coordinate system; we tend to use EUS because it's similar to what most 3D graphics systems use (Y is up, Z points south, and X is east).
One of the reasons we chose this approach is that we've found in the past that if we had predictable local coordinates, application developers would store data using those coordinates, even though that's not a good idea (your data is then tied to some relatively arbitrary application-specific coordinate system, and will only work in that location).
So, now to your question. Your issue is that you want to use geospatial coordinates (Cesium's coordinates, which Argon uses) in AFrame. The short answer is that you can't use them directly, since AFrame is built assuming a local 3D graphics coordinate system. The argon-aframe package binds AFrame to Argon by letting you attach referenceframe components that position an a-entity at an Argon/Cesium geospatial location and take care of all the internal conversions for you.
The assumption when I wrote that code was that authors would create their content using local 3D graphics coordinates, and attach those chunks of graphics to a-entities that were located in the world with referenceframe components.
In order to have individual coordinates in AFrame correspond to geospatial places, you will need to manage that yourself, perhaps by creating a component to do it for you, or (if the data is known at the start) by converting it up front.
Here's what I'd do.
Assuming you have a list of geospatial coordinates (expressed as LLA), I'd convert each to local coordinates by first converting from LLA to Cesium's FIXED ECEF coordinates and creating a Cesium Entity, and then calling Argon's context.getEntityPose() on that entity (which will return its local coordinates). I would pick one geospatial location in the set (perhaps the first one?) and then subtract its local coordinates from each of the others, so that they are all expressed in local coordinates relative to that known geospatial location.
Then, I'd create an AFrame entity attached to the referenceframe of that one geospatial entity, and create your graphics content inside it, using the local coordinates expressed relative to it. For example, if the geospatial location is LongLat = "-84.398881 33.778463" and you stored those points (local coordinates relative to LongLat) in userPath, you could do something like this:
<ar-scene>
  <ar-geopose id="GT" lla="-84.398881 33.778463" userotation="false">
    <a-entity meshline="lineWidth: 20; path: userPath; color: #E20049"></a-entity>
  </ar-geopose>
</ar-scene>
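For reference, the conversion chain described above (LLA to ECEF, then to local east/north/up offsets from a chosen reference point) is standard geodesy math. In Argon you would let Cesium and context.getEntityPose() do this for you; the sketch below, in plain C++ with WGS84 constants and no Argon/Cesium API, is only meant to show what happens under the hood:

#include <cmath>

struct Vec3 { double x, y, z; };

// WGS84 ellipsoid constants.
constexpr double kA  = 6378137.0;            // semi-major axis (meters)
constexpr double kF  = 1.0 / 298.257223563;  // flattening
constexpr double kE2 = kF * (2.0 - kF);      // first eccentricity squared

// Geodetic latitude/longitude (radians) and height (meters) to ECEF.
Vec3 llaToEcef(double lat, double lon, double h)
{
    double n = kA / std::sqrt(1.0 - kE2 * std::sin(lat) * std::sin(lat));
    return { (n + h) * std::cos(lat) * std::cos(lon),
             (n + h) * std::cos(lat) * std::sin(lon),
             (n * (1.0 - kE2) + h) * std::sin(lat) };
}

// Express an ECEF point as east/north/up offsets from a reference ECEF point.
Vec3 ecefToEnu(const Vec3& p, const Vec3& ref, double refLat, double refLon)
{
    double dx = p.x - ref.x, dy = p.y - ref.y, dz = p.z - ref.z;
    double sLat = std::sin(refLat), cLat = std::cos(refLat);
    double sLon = std::sin(refLon), cLon = std::cos(refLon);
    return { -sLon * dx + cLon * dy,                             // east
             -sLat * cLon * dx - sLat * sLon * dy + cLat * dz,   // north
              cLat * cLon * dx + cLat * sLon * dy + sLat * dz }; // up
}

Argon's EUS convention then maps these to X = east, Y = up, Z = -north.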

How does a non-tile-based map work?

OK, here is the thing. Recently I decided I wanted to understand how random map generation works. I found some papers and some approaches; the most interesting ones were the "Diamond-Square algorithm" and "Midpoint Displacement". I still have to try applying those in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, or even how to create those polygons using the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what's blocking you is how to represent the data. You're probably used to a 2D grid that simply represents the type of each tile. As you know, this is fine for handling a tile-based map, but it doesn't properly allow you to model worlds where the tiles have different shapes.
Graphs
What I suggest is to see the problem a bit differently. A grid is nothing more than a graph (more info) with nodes that have 4 (or 8, if you allow diagonals) implicit neighbor nodes. So first, what I would do if I were you would be to move from your strict standard 2D grid to a more "loose" graph, where each node has a position, a list of neighbors (in most cases you'll have corners with 2 neighbors, borders with 3 and "middle" tiles with 4), and finally a rendering component which simply draws your tile on screen at the given position. Once this is done, you should be able to get the exact same results on screen that you currently have with your "2D tile-based" engine, by simply calling the rendering component for each node whose bounding box (I didn't include it in what you should add to your node, but I'll get back to this later) intersects the camera's frustum (in a 2D world, that most likely means the position +/- the size intersects the RECT currently being drawn).
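A minimal sketch of such a node in C++ (the field names and the drawTile call are illustrative assumptions, not from the original answer):

#include <vector>

struct Vec2 { float x, y; };

// One node of the "loose" graph that replaces the strict 2D grid.
struct MapNode {
    Vec2 position;                    // where the tile sits in the world
    Vec2 size;                        // bounding-box extents, used for culling
    int  tileType;                    // whatever your engine uses to pick a sprite
    std::vector<MapNode*> neighbors;  // 2, 3 or 4 entries for a grid; arbitrary later
};

// Draw every node whose bounding box overlaps the camera rectangle.
void renderVisible(const std::vector<MapNode>& nodes, Vec2 camMin, Vec2 camMax)
{
    for (const MapNode& n : nodes) {
        bool visible = n.position.x + n.size.x >= camMin.x && n.position.x <= camMax.x &&
                       n.position.y + n.size.y >= camMin.y && n.position.y <= camMax.y;
        if (visible) {
            // drawTile(n);  // hypothetical call into your SDL rendering code
        }
    }
}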
Search
The more generic approach will also help you do things like pathfinding with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decide to stick with a good old 2D tile-map game, these techniques would still be useful!
Yeah but I want Polygons
I hear you! So, if you want polygons, basically all you need to do is add to your nodes a list of vertices and whatever data you might need to render your polygons (vertex colors, textures and U/V maps, etc.), and update your rendering component to make the appropriate OpenGL calls (this, for example, should help) to draw your nodes. The first step in iteratively upgrading your 2D tile engine to a polygon map engine would be, for each tile in your map, to give its node two triangles, a texture resource (the tile), and U/V mappings (0,0 - 0,1 - 1,0 and 1,1). When this step is done, you should have a "generic" polygon-based tile map engine. Most of this data can be generated procedurally by calculating coordinates from the tile position, tile size, etc.
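As a sketch of that upgrade step (again C++; the names are illustrative): each square tile becomes two textured triangles sharing the tile's corners.

#include <array>

struct Vertex {
    float x, y;  // position in world units
    float u, v;  // texture coordinates into the tile's texture
};

// Build the two triangles (six vertices) covering one square tile whose
// lower-left corner is at (px, py) and whose edge length is 'size'.
std::array<Vertex, 6> tileToTriangles(float px, float py, float size)
{
    Vertex bl{ px,        py,        0.0f, 0.0f };
    Vertex br{ px + size, py,        1.0f, 0.0f };
    Vertex tl{ px,        py + size, 0.0f, 1.0f };
    Vertex tr{ px + size, py + size, 1.0f, 1.0f };
    return { bl, br, tr,    // first triangle
             bl, tr, tl };  // second triangle
}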
Convex Polygons
If you decide that you might ever need NPCs to navigate your map, or want to allow your player to navigate by clicking the map, I would suggest that you always use convex polygons (the triangle being the simplest form of convex polygon). This allows your code to assume that any two positions in the same polygon can be reached from one another in a straight line.
Complex Maps
Based on the link you provided, you want to have rather complex maps. In this case, the author used Voronoi diagrams to generate the polygons of the map. There are already solutions for doing triangulation like that, but you might also want to use other techniques that are easier to work with if you're just switching to 3D, like this one for example. Once you have interesting results, you should consider implementing serialization to save/load your map data from the game. If you want to create an editor, be aware that it might be a lot of work, but it can be worth it if you want people to help you create maps or to add elements to the maps (like geometry that's not part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles and do a hit-test from the centre of each tile to the polygons; set the type of the tile to the type of the polygon it lands in. Did you need more than that?
EDIT: Sorry, I realize that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and chooses randomly whether or not each tile is occupied. Then, iterate over them again and decide whether each tile is occupied based on whether its neighbours are.
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html

Generating 3D TV stereoscopic output programmatically

Do you know what would be the best approach to generate 3D output for one of these new "3D ready" televisions from software? Our application has some nice 3D visualizations, and we want these to look nice.
Also, how feasible is it to generate this from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for images shot in 3D. This means "intended for simulated 3D," not just a two-dimensional projection of a 3D image.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye) in which the foreground objects are offset to simulate a 3D image. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as 3D frames from the beginning.
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy
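If your application already renders the 3D scene itself, the usual way to produce the two per-eye images is simply to render the scene twice with the camera shifted by half the eye separation along its "right" vector. A rough sketch, assuming C++ and a hypothetical renderScene callback (a production setup would use asymmetric view frusta rather than a plain offset, and how the two images reach the TV - side-by-side, top-and-bottom, or a vendor stereo driver like the NVIDIA one mentioned above - depends on your output path):

#include <functional>

struct Vec3 { float x, y, z; };

// Render one image for each eye by offsetting the camera along its right
// vector. 'renderScene' is assumed to draw the world from the given eye
// position and is not defined here.
void renderStereoFrame(Vec3 camPos, Vec3 camRight, float eyeSeparation,
                       const std::function<void(Vec3 eyePos, bool leftEye)>& renderScene)
{
    float half = eyeSeparation * 0.5f;
    Vec3 leftEye  { camPos.x - camRight.x * half,
                    camPos.y - camRight.y * half,
                    camPos.z - camRight.z * half };
    Vec3 rightEye { camPos.x + camRight.x * half,
                    camPos.y + camRight.y * half,
                    camPos.z + camRight.z * half };

    renderScene(leftEye,  true);   // e.g. into the left half of a side-by-side frame
    renderScene(rightEye, false);  // e.g. into the right half
}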

3D Software Renderer with VB6

I am an IT student and I have to make a project in VB6. I was thinking of making a 3D software renderer, but I don't really know where to start. I found a few tutorials, but I want something that goes in depth on the maths and algorithms; I would like something that shows how to do 3D transformations, camera, lights, shading...
The programming language used does not matter; I just need some resources that show me exactly how to make this.
So I just want to know where to find some resources, or you could show me some source code and tell me where to start.
Or if any of you have a better idea for a VB6 project.
Thanks.
I disagree with the previous posts; a 3D renderer is actually pretty simple. A high-quality 3D renderer is hard, however.
Get a bunch of 3D data, triangles are simplest.
Learn about homogeneous coordinates and the great 4x4 matrix for transforms.
Define a camera by a position and a rotation (expressed in the 4x4 matrix).
Transform your 3D geometry by this camera.
Perform the perspective divide and scale to your window. This converts your 3D data to 2D (there is a sketch of these last steps just after this answer).
Render the data as 2D.
Now you're going to lose out on a depth buffer, so stick to wireframes in the beginning. :-)
Don't listen to these nay-sayers, go out and have some fun!
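As a sketch of steps 4-6 above, assuming C++, a row-major 4x4 world-to-camera matrix and a camera looking down +Z (all of which are simplifying assumptions for illustration, not the poster's code):

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Row-major 4x4 matrix, applied to a point with an implicit w = 1.
struct Mat4 { double m[4][4]; };

// Step 4: transform a world-space point into camera space.
Vec3 transformPoint(const Mat4& M, const Vec3& p)
{
    return { M.m[0][0] * p.x + M.m[0][1] * p.y + M.m[0][2] * p.z + M.m[0][3],
             M.m[1][0] * p.x + M.m[1][1] * p.y + M.m[1][2] * p.z + M.m[1][3],
             M.m[2][0] * p.x + M.m[2][1] * p.y + M.m[2][2] * p.z + M.m[2][3] };
}

// Steps 5-6: perspective divide and scale into a window of width x height pixels.
// 'focal' plays the role of the field of view. Points with z <= 0 are behind
// the camera and must be clipped before calling this.
Vec2 projectToScreen(const Vec3& camSpace, double focal, double width, double height)
{
    double sx = focal * camSpace.x / camSpace.z;  // perspective divide
    double sy = focal * camSpace.y / camSpace.z;
    return { width * 0.5 + sx,     // window center is the origin
             height * 0.5 - sy };  // flip Y so "up" is up on screen
}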
Many years ago I made a shaded triangle renderer that used library calls to draw the triangles. It's a rather naive approach, but you would be able to achieve the same result using VB6. I got all the maths and techniques from "Computer Graphics: Principles and Practice" by Foley et al. Some parts are out of date now, but I think you'd find it very helpful for this project, and it can be bought second-hand at reasonable prices, from Amazon for example.
One simple approach could be:
Read model file as triangles
Transform each triangle using matrices to account for camera position
Project triangle points onto 2D
Draw 2D triangle (probably using GDI)
This covers wireframe viewing. To extend this to hidden surface removal you need to work out which triangles are in front. Two possible ways:
Z-order sorting the triangles and drawing the ones furthest from the camera first. This is simple, but it is inefficient if there are a lot of triangles and can give overlapping-triangle artifacts when the order is not quite correct. You also have to decide how to sort the triangles - e.g. by centroid, by extents... (there is a sketch of this sort just after this list).
Using a software depth buffer. This will give better results but is more work to implement. You will have to write your own triangle-drawing code, so you cannot rely on GDI. See Bresenham's line algorithm and related algorithms for filled triangles to learn how to do this.
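A minimal sketch of the first option (painter's algorithm, sorting by centroid depth; C++ rather than VB6 purely for illustration, with an assumed Triangle type already transformed into camera space):

#include <algorithm>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Depth of a triangle's centroid along the camera's Z axis.
static double centroidDepth(const Triangle& t)
{
    return (t.a.z + t.b.z + t.c.z) / 3.0;
}

// Painter's algorithm: draw the farthest triangles first so nearer ones
// overwrite them. As noted above, this fails for intersecting or
// cyclically overlapping triangles.
void sortBackToFront(std::vector<Triangle>& tris)
{
    std::sort(tris.begin(), tris.end(),
              [](const Triangle& l, const Triangle& r) {
                  return centroidDepth(l) > centroidDepth(r);
              });
}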
After this you'd also need some kind of shading based on lighting. The calculations are covered in Computer Graphics: Principles and Practice. For simple shading you can stick with drawing triangles using GDI, but if you want to do Gouraud or Phong shading, the colour values vary across a triangle. One way around this is to sub-divide the triangle into smaller triangles, but this is inefficient and won't give very nice-looking results. Better would be to draw the triangles yourself, as required above for the software depth buffer.
A good extension would be to support primitives other than triangles. Basic approach would be to split primitives into triangles as you read them.
Good luck - could be an interesting project.
VB6 is not the best-suited language for maths and 3D graphics, and given that you have no previous knowledge of the subject either, I would recommend that you choose something different (and easier).
As it's Visual Basic, you could try something more form-oriented; that is the original intent of the language.
There is the 3D engine list, which lists three engines in pure Basic (an oxymoron) with source code, and one of them is in Visual Basic (Dex3D).
DeX3D is an open source 3D engine coded entirely in Visual Basic by Jerry Chen ( -onlyuser#hotmail.com ).
Gouraud shading
Transparency
Fogging
Omni and spot lights
Hierarchical meshes
Support for 3D Studio files
Particle systems
Bezier curve segments
2.5 D text
Visual Basic source
More information, screenshots and the source can be found on the Dex3D homepage. (<= Dead Link)
EGL25 by Erkan Sanli is a fast open-source VB6 renderer that can render, rotate, animate, etc., complex solid shapes made of thousands of polygons. It uses just Windows API calls – no DirectX, no OpenGL.
VBMigration.com chose EGL25 as a high-quality open-source VB6 project to demonstrate their VB6 to VB.Net upgrade tool.
A 3D software renderer as a whole project is fairly complex if you've never done it before. I would suggest something smaller - like just doing the 3D portion and using lines for the rendering, OR just writing a shaded triangle renderer (which is the underpinning of 3D renderers anyway).
Something a little simpler rather than trying to write a full-blown 3D software renderer on the first go - especially in VB.
A software renderer is a very difficult project, and VB6 is not well suited to it at all (for a task like this, C++ is the way to go), but I can suggest some great books I used:
Shaders: http://wiki.gamedev.net/index.php/D3DBook:Introduction_%28Volume%29
Math: 3D Math Primer for Graphics and Game Development
There are two other books. Even though they are for VB.NET, you can find some useful code in them:
.NET Game Programming with DirectX 9.0
Beginning .NET Game Programming in VB .NET
I think you can go one of two ways. Either take the DirectX route and use DirectX 8, which has VB 5-6 support; I found a page: http://www.gamedev.net/reference/articles/article1308.asp
Or you can always write an engine from the ground up, but by doing so you will need some basic linear algebra, as Frank Krueger suggests.

Resources