I know that if I rotate an object which extends javafx.scene.shape.Shape, I can transform it into 3D space, even though the class was primarily designed for 2D (at least as far as I know).
Let's say I have a 3D scene (a perspective camera and the depth buffer are used) containing various MeshViews. Some are used for areas, others for lines. In both cases the shapes must be triangulated in order to draw them with a TriangleMesh, which is often nontrivial.
Now when I change the drawing of these lines to use the Polyline class, the performance collapse is horrible and there are strange artifacts. I thought I could benefit from the fact that a polyline has fewer vertices and that the developer doesn't have to triangulate programmatically.
Is it discouraged to use shapes extending javafx.scene.shape.Shape within 3D space? How are they drawn internally?
If the question is "should I use 2D Shape objects in 3D space in JavaFX?" then the answer is no, because you will get exactly the terrible performance you are seeing. However, it sounds like you are trying to make up for JavaFX's lack of a 3D polyline object by using 2D objects and rotating them into 3D space. If that is true, instead use the free open-source F(X)yz library:
http://birdasaur.github.io/FXyz/
For example, the PolyLine3D class allows you to simply specify a list of 3D points, and it will connect them for you:
/src/org/fxyz/shapes/composites/PolyLine3D.java
and you can see example code on how to use it in the test directory:
/src/org/fxyz/tests/PolyLine3DTest.java
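For orientation, here is a minimal, untested sketch of how that usage might look. It assumes a PolyLine3D(List&lt;Point3D&gt;, width, Color) constructor and FXyz's own Point3D class (not javafx.geometry.Point3D); both may differ between library versions:

import java.util.Arrays;
import java.util.List;

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.stage.Stage;

import org.fxyz.geometry.Point3D;
import org.fxyz.shapes.composites.PolyLine3D;

public class PolyLineSketch extends Application {
    @Override
    public void start(Stage stage) {
        // The points the polyline should pass through.
        List<Point3D> points = Arrays.asList(
                new Point3D(0, 0, 0),
                new Point3D(50, 50, 0),
                new Point3D(100, 0, 50));

        // Line width, then the line color.
        PolyLine3D line = new PolyLine3D(points, 3, Color.ORANGE);

        Group root = new Group(line);
        Scene scene = new Scene(root, 600, 400, true); // depth buffer on
        scene.setCamera(new PerspectiveCamera(true));
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

No triangulation on your side: the class builds the TriangleMesh from the point list for you.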
I'm looking to auto-generate a UML class model in virtual reality using A-Frame.io (or another technology) by passing in values. Has anyone done something similar in the past? I'm not sure where to start.
You might want to look into plantuml, which is a nice UML generator. Most of its diagrams are generated as input to graphviz's dot. Dot is a layout engine: it takes a list of nodes and connections, lays them out in 2D space, and then renders them to one of its output formats, or simply returns the graph annotated with coordinates saying where to draw what. You could meddle with this data and render the elements with volume, but on a 2D plane at the dot-generated coordinates. Perhaps you could even modify it to place them in 3D space instead of on a plane.
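To make that concrete, here is a rough, hypothetical sketch (the input file name and the box sizes are made up) that runs dot with its documented plain-text output format and prints one A-Frame box per node, reusing dot's 2D layout coordinates as ground-plane positions:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DotToAFrame {
    public static void main(String[] args) throws Exception {
        // "plain" output: one line per node, "node <name> <x> <y> <w> <h> ..."
        Process dot = new ProcessBuilder("dot", "-Tplain", "classes.dot").start();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(dot.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] t = line.split("\\s+");
                if (t.length >= 6 && t[0].equals("node")) {
                    double x = Double.parseDouble(t[2]); // dot's x
                    double z = Double.parseDouble(t[3]); // dot's y becomes depth
                    double w = Double.parseDouble(t[4]); // node width
                    System.out.printf(
                        "<a-box position=\"%.2f 0.5 %.2f\" width=\"%.2f\""
                        + " height=\"1\" depth=\"0.2\"></a-box>%n", x, z, w);
                }
            }
        }
    }
}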
Or you could just render the plantuml output on a 2D plane, place it in 3D space, and that would probably be good enough. There are even online generators for plantuml.
I'm currently rendering a 3D model (Wavefront .obj format) in my Qt program. Right now I'm rendering the model using Scene3D in QML, and I'm able to get it to display in the viewing area. What I would like to do is have the user click on the model and generate a 2D cross-section of that slice, which I would then plot in a different window. I'm quite new to 3D rendering, and a lot of the Qt documentation isn't very descriptive. I've been reading the Qt documentation, experimenting, and searching online with no luck. How can I create 2D slices of a 3D model in Qt 3D, preferably in QML? What Qt libraries or classes can I use to achieve this?
Unfortunately, the fact that models are stored as a set of surfaces makes this hard. Qt probably doesn't have a built-in method for it.
Consider, for example, that a model made of faces might be missing a face. What then? Can you interpolate across that gap consistently from different angles? And what about the fact that a cross-section probably won't contain any of the model's vertices?
But, of course, it can be solved. First, don't allow unclosed surfaces (meshes with holes). Second, to find the vertices of your cross-section, intersect every edge in your model with the cutting plane; wherever there is an intersection, there is a point of the cross-section. Third, to find the edges, look at the list of those vertices: any two that come from edges of the same polygon in the mesh should be connected by an edge in the cross-section. To decide which direction that edge should go, project the normal of the polygon onto the plane you're using. As for filling, I don't really know what to do; I guess that's whatever you want it to be.
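A minimal sketch of the second step, intersecting one mesh edge with the cutting plane; Vec3 and every name here are illustrative, not part of Qt's API:

public record Vec3(double x, double y, double z) {
    Vec3 minus(Vec3 o)   { return new Vec3(x - o.x, y - o.y, z - o.z); }
    Vec3 plus(Vec3 o)    { return new Vec3(x + o.x, y + o.y, z + o.z); }
    Vec3 times(double s) { return new Vec3(x * s, y * s, z * s); }
    double dot(Vec3 o)   { return x * o.x + y * o.y + z * o.z; }

    // Returns the point where segment [a, b] crosses the plane, or null.
    static Vec3 edgePlaneIntersection(Vec3 a, Vec3 b,
                                      Vec3 planePoint, Vec3 planeNormal) {
        Vec3 ab = b.minus(a);
        double denom = planeNormal.dot(ab);
        if (Math.abs(denom) < 1e-9) return null; // edge parallel to the plane
        double t = planeNormal.dot(planePoint.minus(a)) / denom;
        if (t < 0.0 || t > 1.0) return null;     // plane misses this segment
        return a.plus(ab.times(t));              // a cross-section vertex
    }
}

Running this over every edge of every face gives you the cross-section's vertices; pairing the hits that came from the same face gives you its edges.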
OK, here is the thing. Recently I decided I wanted to understand how random map generation works. I found some papers and articles. The most interesting ones were about the diamond-square algorithm and midpoint displacement. I still have to try applying those in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, or even how to create those polygons using the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not wrong, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what blocks you is how to represent the data. You're probably used to a 2D grid that simply stores the type of each tile. As you know, this is fine for a tile-based map, but it doesn't let you model worlds where tiles have different shapes.
Graphs
What I suggest is to see the problem a bit differently. A grid is nothing more than a graph (more info) whose nodes have 4 (or 8, if you allow diagonals) implicit neighbor nodes. So first, if I were you, I would move from the strict standard 2D grid to a looser graph, where each node has a position, a list of neighbors (in most cases you'll have corners with 2 neighbors, borders with 3 and "middle" tiles with 4) and a rendering component which simply draws the tile on screen at the given position. Once this is done, you should get the exact same results on screen that you currently have with your 2D tile-based engine, by calling the rendering component of each node whose bounding box (I didn't mention it in the node's data above, but I'll get back to it later) intersects the camera's frustum. In a 2D world, that most likely means checking whether the position plus or minus the size intersects the rectangle currently being drawn.
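A rough, illustrative sketch of such a node (all names are made up):

import java.util.ArrayList;
import java.util.List;

class TileNode {
    double x, y;                  // position in world space
    double width, height;         // bounding box, used for culling
    List<TileNode> neighbors = new ArrayList<>(); // 2, 3 or 4 on a plain grid
    int tileType;                 // whatever your rendering component needs

    // True if this node's bounding box overlaps the camera rectangle,
    // i.e. the node should be drawn this frame.
    boolean intersects(double camX, double camY, double camW, double camH) {
        return x < camX + camW && x + width > camX
            && y < camY + camH && y + height > camY;
    }
}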
Search
This more generic approach will also help you do things like pathfinding, with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decide to stick to a good old 2D tile-map game, these techniques will still be useful!
Yeah but I want Polygons
I hear you! So, if you want polygons, basically all you need to do is add to your nodes a list of vertices and whatever data you need to render your polygons (vertex colors, textures and U/V maps, etc.), and update your rendering component to make the appropriate OpenGL calls (this, for example, should help) to draw your nodes. Once again, the first step in iteratively upgrading your 2D tile engine to a polygon map engine would be to give each node two triangles, a texture resource (the tile) and U/V mappings (0,0 - 0,1 - 1,0 and 1,1). When this step is done, you will have a "generic" polygon-based tile-map engine. Most of this data can be generated procedurally by calculating coordinates from the tile position, tile size, and so on.
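An illustrative sketch of that procedural step, producing the two triangles and matching U/V coordinates for one square tile (the vertex layout is up to your renderer):

class TileQuads {
    // Returns { vertices, uvs } for the tile at grid cell (col, row).
    static float[][] tileQuad(int col, int row, float tileSize) {
        float x0 = col * tileSize, y0 = row * tileSize;
        float x1 = x0 + tileSize,  y1 = y0 + tileSize;
        float[] vertices = {          // two counter-clockwise triangles
            x0, y0,  x1, y0,  x0, y1,
            x1, y0,  x1, y1,  x0, y1
        };
        float[] uvs = {               // matching texture coordinates
            0, 0,  1, 0,  0, 1,
            1, 0,  1, 1,  0, 1
        };
        return new float[][] { vertices, uvs };
    }
}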
Convex Polygons
If you decide that you might ever need NPCs to navigate your map, or want to let the player navigate by clicking the map, I suggest you always use convex polygons (the triangle being the simplest form of a convex polygon). This allows your code to assume that any two positions inside the same polygon can be reached from each other in a straight line.
Complex Maps
Based on the link you provided, you want rather complex maps. In that case, the author used Voronoi diagrams to generate the polygons of the map. There are existing solutions for triangulation like that, but you might also want to use other techniques that are easier to work with if you're just switching to 3D (this one, for example). Once you have interesting results, you should consider implementing serialization so you can save and load your map data from the game. If you want to create an editor, be aware that it might be a lot of work, but it can be worth it if you want people to help you create maps or add elements to them (like geometry that isn't part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles and do a hit-test from the centre of each tile against the polygons; the tile takes the type of the polygon it falls in. Did you need more than that?
EDIT: Sorry, I realize that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and randomly chooses whether or not each tile is occupied. Then iterate over them again and mark each tile occupied if it or one of its neighbours was occupied.
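A quick sketch of those two passes, under that reading (everything here is made up for illustration):

import java.util.Random;

class RandomMapSketch {
    static boolean[][] generate(int w, int h, double fillChance, long seed) {
        Random rng = new Random(seed);
        boolean[][] map = new boolean[w][h];
        // Pass 1: randomly decide whether each tile is occupied.
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                map[x][y] = rng.nextDouble() < fillChance;

        // Pass 2: a tile is occupied if it or any neighbour was occupied.
        boolean[][] next = new boolean[w][h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                for (int dx = -1; dx <= 1 && !next[x][y]; dx++)
                    for (int dy = -1; dy <= 1 && !next[x][y]; dy++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && ny >= 0 && nx < w && ny < h
                                && map[nx][ny])
                            next[x][y] = true;
                    }
        return next;
    }
}

Swapping the rule in pass 2 (for example, occupied only when at least five of the nine surrounding cells are) gives the cave-like smoothing used by many cellular-automaton generators.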
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html
I am completely new to 3D and started with Jeff Lamarche's tutorials as an introduction to OpenGL ES for iPhone. So far I am able to draw a spinning sphere, which will be the base of my application.
What I want to do is render the planet Earth using 2D GIS vector data (polygons, lines or points with latitude/longitude or x/y coordinates).
I want to be able to turn different layers on/off, and maybe to identify an object when it is touched.
My questions are:
would it be easier to rasterize my vector data and use it as an image texture, or to apply the vector data directly onto the sphere (keeping in mind that I want to toggle layers on/off, the touch-enabled objects being optional)?
would it be easier to draw the planet and its layers in a tool like Blender, rather than starting from the procedural sphere I already have?
do Blender's export tools work well for OpenGL?
This kind of question is difficult to answer in general. Technically, your intention sounds a lot like you would like to write a program such as Google Earth or KDE Marble. Since you're referring to GIS data, you will require very high resolution; textures only make sense for limited-resolution data.
GIS applications usually take a hybrid approach: some vector data are rendered directly (roads, waters, borders), while others are rendered to texture, with the texture, or more accurately texture tiles, used as a cache, for example for building outlines in dense cities and the like. However, data as it comes from, say, OSM can be rendered directly as vector data, since it is not very dense.
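If you render the vector data directly, the core step is projecting each latitude/longitude vertex onto the sphere. A small illustrative sketch (in Java for brevity; the axis convention, y through the poles, is an assumption to adapt to your scene):

class GeoProjection {
    static float[] latLonToXyz(double latDeg, double lonDeg, double radius) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double x = radius * Math.cos(lat) * Math.cos(lon);
        double y = radius * Math.sin(lat);  // y axis through the poles
        double z = radius * Math.cos(lat) * Math.sin(lon);
        return new float[] { (float) x, (float) y, (float) z };
    }
}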
How do I blur a 3D object in Papervision3D, and save the new object as a new 3D model? (This could help with sky/cloud generation.)
Like in this 2D picture, where I've turned a rectangle into a blurry structure (image source: narod.ru).
Set useOwnContainer to true, then add the filter:
your3DObject.useOwnContainer = true;            // render into its own 2D container
your3DObject.filters = [new BlurFilter(4,4,2)]; // then apply a standard 2D blur
When you set useOwnContainer to true, a new 2D DisplayObject is created to render the 3D projection into, and you can apply any of the usual DisplayObject properties to it.
Andy Zupko has a good post about this and render layers.
Using this will cost your processor a bit, so use it wisely. For example, in the twigital I worked on at disturb media, we used one Glow for the layer that holds all the characters, not individual render layers for each character. On other projects we 'baked' the filters into bitmaps and used those; this meant a bit more memory, but freed up the processor for other tasks.
HTH
I'm not familiar with Papervision 3D, but blurring in 3D is normally just blurring in 2D. You pick the object you want blurred, determine the blurring you want for that object, then apply a 2D blur before compositing other objects into the scene.
This is a cheat because in principle, different parts of the object may need different degrees of (depth of field) blurring. But it's not the only cheat in 3D graphics.
That said, there are other approaches. Ray-tracing can give true depth-of-field effects (if you're willing to pay the render-time costs). It's also possible to apply a blur to a 3D "voxel" grid instead of a 2D pixel grid - though I imagine that's more useful for smoothing shapes from e.g. medical scanners than for giving depth-of-field effects.
Blur is a 2D operation; try rendering the object into a texture and blurring that texture.
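To make "blur that texture" concrete, here is a tiny illustrative box blur over a grayscale pixel buffer (in Java purely for illustration; Papervision's BlurFilter does the equivalent for you):

class BlurSketch {
    // Averages each pixel with its neighbours within radius r.
    static float[] boxBlur(float[] src, int w, int h, int r) {
        float[] dst = new float[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float sum = 0;
                int n = 0;
                for (int dy = -r; dy <= r; dy++)
                    for (int dx = -r; dx <= r; dx++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && ny >= 0 && nx < w && ny < h) {
                            sum += src[ny * w + nx];
                            n++;
                        }
                    }
                dst[y * w + x] = sum / n;
            }
        return dst;
    }
}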