JavaFX Depth Testing in 3D Scene leading to Z-fighting

I've created an app that takes DTED positional data and creates a basic contour mesh. With depth testing enabled this works fine and I don't have issues simply rendering the terrain.
The problem I've run into is that when I place objects on the terrain surface I get a lot of z-fighting, causing visual corruption in the boxes/spheres. Is there any way to mitigate this besides modifying the nearclip/farclip?
I've tried using a nearclip of 0.1 and a farclip of 5000 and I still get a lot of flicker. Keep in mind my terrain may be 100k units wide, so I want to keep my farclip high enough to view the entire terrain at once. I've gone through every question related to the depth buffer in FX and have not yet found anything that helps mitigate it besides the near/farclip settings.

I had the same problem with JavaFX. Your solution worked for me, but I was curious why that was, especially because I found someone with the same problem, but in Autodesk Maya.
The reason behind this, of course, is depth buffer precision. As the zFar plane moves further away, or the zNear plane comes closer, too much precision is lost. The zNear plane affects this much more than the zFar plane.
A good explanation is on the Khronos OpenGL wiki: Depth Buffer Precision
TL;DR: Always move the zNear plane as far away as possible and the zFar plane as near as possible.
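To get a feel for where the precision actually goes, here is a small self-contained sketch (plain Java; the formula is the standard hyperbolic window-depth mapping used by OpenGL-style depth buffers, not anything JavaFX-specific):

public class DepthPrecisionDemo {
    // Normalized window-space depth for an object at eye-space distance d,
    // given near plane n and far plane f (standard perspective projection).
    static double windowDepth(double d, double n, double f) {
        return f * (d - n) / (d * (f - n));
    }

    public static void main(String[] args) {
        double far = 5000.0;
        for (double near : new double[] { 0.1, 1.0 }) {
            // Print how much of the [0,1] depth range is already consumed
            // at a few sample distances for this near-plane choice.
            for (double dist : new double[] { 10, 100, 1000, 4000 }) {
                System.out.printf("near=%.1f  dist=%6.0f  depth=%.6f%n",
                        near, dist, windowDepth(dist, near, far));
            }
        }
    }
}

With near = 0.1 and far = 5000, roughly 99.99% of the depth range is already consumed by 1000 units out, so distant surfaces have almost no resolution left to separate them; raising near to 1 leaves about ten times as much resolution for the far field.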

After a lot of tinkering I found that adjusting my farclip never helped reduce the flicker. However, if I set my nearclip to 1 I can have a farclip of 500k with virtually no flicker.
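For reference, this is roughly how those settings look in JavaFX; the scene setup here is just a placeholder, and the relevant parts are the depth-buffer flag and the setNearClip/setFarClip calls:

import javafx.application.Application;
import javafx.scene.*;
import javafx.stage.Stage;

public class TerrainViewer extends Application {
    @Override
    public void start(Stage stage) {
        Group root = new Group(); // terrain mesh and markers would be added here
        // The 'true' flag enables the depth buffer for the scene.
        Scene scene = new Scene(root, 1024, 768, true, SceneAntialiasing.BALANCED);

        PerspectiveCamera camera = new PerspectiveCamera(true);
        camera.setNearClip(1.0);      // keep this as large as you can tolerate
        camera.setFarClip(500_000.0); // the far clip has far less effect on precision
        scene.setCamera(camera);

        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) { launch(args); }
}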

Related

How do I adapt AStar in Godot to platformers?

I've been looking for a robust method of pathfinding for a platformer based game I'm developing and A* looks like it's the best method available. I noticed there is a demo for the AStar implementation in Godot. However, it is written for a grid/tile based game and I'm having trouble adapting that to a platformer where the Y axis is limited by gravity.
I found a really good answer that describes how A* can be applied to platformers in Unity. My question is: is it possible to use AStar in Godot to achieve the same thing described in the above answer? Could this be done better without using the built-in AStar framework? What is a really simple example of how it would work (with or without AStar) in GDScript?
Though I have already posted a 100 point bounty (and it has expired), I would still be willing to post another 100 point bounty and award it, pending an answer to this question.
You could repurpose the Navigation2D node for platformer purposes. The picture below shows an example usage. The Navigation2D node makes it possible to navigate the shortest path between two points that lie within the combined navigation polygon (this is the union of all NavigationPolygonInstances).
You can use the get_simple_path method to get a Vector2 array that describes the points your agent/character should try to reach (or get close to, using some predefined margin) in sequence. Place each point in a queue, and move the character towards the different points by moving it horizontally. Whenever the agent's next point in the queue is too high up to reach, make the agent jump.
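Here is a rough sketch of that waypoint-queue loop. It is written in Java rather than GDScript, with made-up Vec2/MoveCommand types and arbitrary thresholds, because the only Godot-specific part is the get_simple_path call that would fill the queue:

import java.util.ArrayDeque;
import java.util.Queue;

class PlatformerPathFollower {
    // Hypothetical 2D point type standing in for Godot's Vector2.
    record Vec2(double x, double y) {}
    record MoveCommand(double horizontal, boolean jump) {}

    private final Queue<Vec2> waypoints = new ArrayDeque<>();
    private final double arriveMargin = 8.0; // "close enough" radius
    private final double maxStepUp = 16.0;   // height the agent can reach without jumping

    // Fill the queue from whatever produced the path (get_simple_path in Godot).
    void setPath(Iterable<Vec2> path) {
        waypoints.clear();
        path.forEach(waypoints::add);
    }

    // Called every physics tick with the agent's current position.
    // Returns a horizontal direction (-1, 0, 1) and a jump flag that is set
    // when the next waypoint is too high up to simply walk to.
    MoveCommand step(Vec2 position) {
        Vec2 target = waypoints.peek();
        if (target == null) return new MoveCommand(0, false);

        if (Math.hypot(target.x() - position.x(), target.y() - position.y()) < arriveMargin) {
            waypoints.poll(); // reached this waypoint, move on to the next one
            return step(position);
        }
        double dir = Math.signum(target.x() - position.x());
        boolean jump = (position.y() - target.y()) > maxStepUp; // y grows downward in Godot 2D
        return new MoveCommand(dir, jump);
    }
}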
I hope this makes sense!
The grey/dark-blue rectangles are platforms with collision, whereas the green shapes are NavigationPolygonInstance nodes.
This approach is by no means perfect. If you were to add slopes to your game, the agent might jump up the slope instead of walking up it normally. It is also pretty tedious to create all the shapes needed.
A more robust solution would be a custom graph system that you could place in the scene, positioning its vertices yourself. This opens up the possibility of one-way paths and of marking certain edges/connections between vertices as "jumpable" only. It is a lot more work, though, if you cannot find an existing solution online.
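As a rough illustration, such a hand-placed graph does not need to be much more than vertices plus flagged edges (these types are made up for the sketch, not any Godot API):

import java.util.List;
import java.util.Map;

class PlatformerNavGraph {
    record Vertex(int id, double x, double y) {}
    // An edge can be marked as traversable only by jumping.
    record Edge(int to, boolean requiresJump) {}

    // Adjacency list of outgoing edges per vertex id. One-way paths are simply
    // edges that exist in one direction but not the other.
    private final Map<Integer, List<Edge>> adjacency;

    PlatformerNavGraph(Map<Integer, List<Edge>> adjacency) {
        this.adjacency = adjacency;
    }

    List<Edge> edgesFrom(int vertexId) {
        return adjacency.getOrDefault(vertexId, List.of());
    }
}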

How to generate recursive shapes like geokone.net using GL.Begin?

http://app.geokone.net/ is an open source JavaScript app for generating shapes (if you can take a look at it for five seconds, I'm sure you'll get the idea; it's really fast).
It's hard for me to go through it because it's a lot of code; what is the general idea?
Also, I need those shapes as GameObjects with polygon colliders around them (anywhere from 0 to 20 of them on the screen at the same time, possibly different shapes too). Is that even possible with GL?
Would GL help me? I think GL would be fast for just one shape or so (since it uses recursion), but for what I want, I think drawing them in real time to a texture and then using the texture as a sprite would be faster (since I can reuse the sprite for shapes that are the same). Or maybe I should use a shader? Is there any other method you can think of?
And for the algorithm itself, what is the general idea?
You don't want to use GL; look into custom mesh generation with MeshFilter. A mesh is required for the colliders anyway.
Meshes only have to be generated once and will probably be faster than any of the optimisations you proposed. You might need a shader to draw them, though.
As for the algorithm, I'm afraid you have to look into it yourself or hire someone to do it. StackOverflow is for helping with issues, not doing the work for you. If you need a hint, look into basic fractals.
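I don't know geokone's actual code, but the usual "basic fractal" idea behind shapes like that is recursion over polygon vertices: generate an n-gon, then repeat the same construction at each vertex with a smaller radius. A rough sketch of generating such vertex rings, which you could then feed into whatever mesh/collider generation you use (the shrink factor and depth here are arbitrary):

import java.util.ArrayList;
import java.util.List;

class RecursivePolygons {
    record Point(double x, double y) {}
    record Ring(Point center, List<Point> vertices) {}

    // Generate an n-gon around 'center', then recurse at each vertex with a
    // smaller radius until 'depth' runs out. Collects every ring produced.
    static List<Ring> generate(Point center, double radius, int sides, int depth, List<Ring> out) {
        List<Point> verts = new ArrayList<>();
        for (int i = 0; i < sides; i++) {
            double angle = 2 * Math.PI * i / sides;
            verts.add(new Point(center.x() + radius * Math.cos(angle),
                                center.y() + radius * Math.sin(angle)));
        }
        out.add(new Ring(center, verts));
        if (depth > 0) {
            for (Point v : verts) {
                generate(v, radius * 0.5, sides, depth - 1, out); // shrink factor is arbitrary
            }
        }
        return out;
    }
}

For example, RecursivePolygons.generate(new RecursivePolygons.Point(0, 0), 1.0, 6, 3, new ArrayList<>()) produces the vertex rings of a hexagon-of-hexagons three levels deep.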

Coordinate System of Flame Fractals

I have been doing some research on flame fractals in preparation of creating my own flame fractal generator. I just have one question: What coordinate system is used in the flame fractal algorithm?
Is it like the Mandelbrot Set with complex numbers, or is it a real number system?
Additionally, what is an optimal range to graph the flame fractals within (i.e. Mandelbrot uses (x-> -2 to 2),(y-> -2i to 2i))?
Original Article about flame fractals (22Mb PDF)
The coordinate systems in Apophysis, flam3, and other implementations use (x, y), or (x, y, z) if the 3D hack is present. However, some variations interpret (x, y) as if it were a complex number, for example the mobius variation or the julia variation.
The exact details of how the math is done are hard to pin down, and nobody really knows all of it, since the existing code is very old and has been developed by many people.
I have, for example, run into problems with the y coordinate behaving strangely.
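For a concrete example of the complex-number interpretation: the julia variation is essentially a complex square root with a randomly chosen branch, i.e. (x, y) is treated as z = x + iy. A minimal sketch in plain Java; the exact angle and sign conventions in the flame paper and in individual implementations differ, so treat this as illustrative rather than as the canonical formula:

import java.util.concurrent.ThreadLocalRandom;

class JuliaVariation {
    // Treats (x, y) as the complex number z = x + iy and returns sqrt(z)
    // with a randomly chosen branch (one of the two complex square roots).
    static double[] apply(double x, double y) {
        double r = Math.sqrt(Math.hypot(x, y));   // sqrt of |z|
        double theta = Math.atan2(y, x) / 2.0;    // half the argument of z
        double omega = ThreadLocalRandom.current().nextBoolean() ? 0.0 : Math.PI;
        return new double[] { r * Math.cos(theta + omega), r * Math.sin(theta + omega) };
    }
}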
EDIT: Ah, Apophysis and flam3 use a sort of camera function, which has a center point, rotation, and magnification. The center point is what gets mapped to the middle of the screen, and the rest you'll be able to figure out.
I am actually coding on a Java implementation, which can be found here: http://sourceforge.net/p/flamethyst/home/Home/
Browse the code for details on camera, coordinates, etc.
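Concretely, that camera step boils down to "subtract the center point, rotate, scale by the magnification, then offset to the middle of the image". A rough sketch of that mapping (the field names and the sign/flip conventions are my own assumptions; each implementation differs in the details):

class FlameCamera {
    final double centerX, centerY; // world-space point mapped to the image center
    final double rotation;         // radians
    final double pixelsPerUnit;    // "magnification"/zoom
    final int imageWidth, imageHeight;

    FlameCamera(double centerX, double centerY, double rotation,
                double pixelsPerUnit, int imageWidth, int imageHeight) {
        this.centerX = centerX; this.centerY = centerY;
        this.rotation = rotation; this.pixelsPerUnit = pixelsPerUnit;
        this.imageWidth = imageWidth; this.imageHeight = imageHeight;
    }

    // Map a flame-space point (x, y) to pixel coordinates.
    double[] toPixel(double x, double y) {
        double dx = x - centerX, dy = y - centerY;
        double rx = dx * Math.cos(rotation) - dy * Math.sin(rotation);
        double ry = dx * Math.sin(rotation) + dy * Math.cos(rotation);
        // y is negated so that "up" in flame space is "up" on screen.
        return new double[] { imageWidth / 2.0 + rx * pixelsPerUnit,
                              imageHeight / 2.0 - ry * pixelsPerUnit };
    }
}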
To answer your specific question, I believe an error in the source code caused the y coordinate to be flipped in one of the transforms, so that the negative y axis extends upward and the positive y axis extends downward.
To answer your actual question about where to find information about the awful mess that is the Apophysis codebase: the place on the internet where most of the experts on how Apophysis actually works hang out is a DeviantArt chatroom at chat.deviantart.com/chat/aposhack. It requires that you sign up for a DeviantArt account. In the chat, there are several people labeled "wizards" who either work with the source code, have gotten sick of the source code and are writing their own flame generators, or are Thomas Ludwig, creator of Chaotica, a fractal flame renderer that does not have many of the bugs and mathematical issues that Apophysis has.
If you are still working on a flame generator, I invite you to stop by and talk fractals with us.

VideoMaterial appears pixellated in Away3D

I'm working on a spherical movie viewer in Away3D & am having a problem when I apply a VideoMaterial texture to a 3D primitive. The video appears heavily pixellated, like it's being scaled or hugely compressed. When I apply a BitmapMaterial of a single still image from the video it appears fine, so I don't think the resolution of the video is the problem.
I found a discussion suggesting a solution by specifying the "fixedHeight" and "fixedWidth" when I call the constructor, but those arguments seem to have no effect, and I can't find them in the API either. I do see something called "lockH" and "lockW" in the API, but they don't seem to have any effect either.
Here's the code constructing the VideoMaterial.
//basic intro setup stuff and then...
var videoURL:String = "assets/clip.flv";
// Sphere primitive that acts as the projection surface for the spherical movie
this.primitive = new Sphere({material:"blue:#cyan", radius:50000, rotationX:100, segmentsW:30, segmentsH:30});
//more code to setup the rest of the scene, and implement some texture switching, then...
// Replace the placeholder material with the video texture
this.primitive.material = new VideoMaterial({file:videoURL, lockH:1000, lockW:2000});
For reference, I'm building off this example as a starting point, and I'm using Away3D 3.6 & Flex 4.5.1 in Eclipse Indigo.
To get rid of the pixelation, set smooth to true. This will obviously not increase the resolution, but it will activate anti-aliasing, the same way that smoothing=true on a native Bitmap does (internally, that's exactly what it does).
If you are going to use a video or bitmap material on a sphere that is used as an environment in a full-screen view, you will need to have a really high resolution video/bitmap. At any one time you can only see at most a third of the sphere surface, and it covers a screen area of more than 1000 pixels in width, so that tells me that your video will need to be at least 3000 pixels wide for it not to suffer from stretch issues.
I'm afraid to say that this is "normal". It mostly has to do with the efficiency of ActionScript code and the lack of hardware acceleration and anti-aliasing. It's essentially impossible to do a transform of your video onto a primitive without some loss in quality because, frankly, ActionScript isn't really made for this kind of intense calculation.
With that said, however, there is hope. There's a new Flash Player coming out "soonish" (or so I've heard) that will have a basic hardware-accelerated 3D renderer (codename "Molehill") that Away3D and other 3D engines (like Alternativa) are already hard at work implementing. This would mean that the video would then be anti-aliased and should therefore be smooth, but I can't confirm this since I've never tried it.

What are the options and best practices for PV3D inspired modeling

The studio I work at is currently developing the Tony Hawk XI website and I am responsible for the flash/AS3 development. As part of the pitch, I entered an augmented reality skateboard example to be shown which impressed the client very much.
After a few weeks of getting stronger with Papervision3D, and getting to know the Flar Toolkit, I have successfully imported md2 and dae files that load and interact with my custom marker.
Now it has come time to develop some of my own models; I will be using 3DSMAX. I want to know what the limitations are on things like poly-count, character rigging and animation, texturing, tricks for exporting and creating the proper format file and any other bits of information that may save me some serious headaches down the road.
Currently I have a Quake2 MD2 model, Ernie, pulled inside of a FlarToolkit demo here.
This is very low-poly, and I was wondering how many polys I could expect to get away with, given that today's machines are so much faster.
Brian Hodge (blog.hodgedev.com)
I've heard that 2000 polys is about the threshold for good performance. In practice, though, it's been hit or miss, and a lot of things can have an impact. So far I've run into performance hits when using animated movieclip materials, animated materials with an alpha channel, and precise materials.
Having to clip objects seems to be a double-edged sword. In some cases it will increase performance by a good deal, and in others (seemingly primarily when there are a lot of polys on the edge of the viewport) it'll drop the framerate by a good 10-15 fps. So I'd say the view you set up is something to think about as well.
For example, we have a model of the interior of a store with some shelves, products, and customers walking around. In total we have just under 600 triangles (according to the StatsView, which you should check out if you haven't yet: org.papervision3d.view.stats.StatsView). On my computer, which is a new machine with a quad core, it runs at a steady 30fps (which is where we want it), but on an old Dell XPS (Pentium 4) it runs between 20 and 30fps depending on what objects are being clipped, etc.
We try to reduce the poly count and texture creatively to fix as many of the performance issues as possible. Unfortunately our minimum specs are really low, so we need to do a lot to get it to run well.
Edit:
Another thing we're doing is swapping in higher-detail models when zoomed in and less detailed ones when zoomed out. If you aren't zooming at all, then this probably won't help.
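That swap is plain level-of-detail switching; in any engine it boils down to something like this (a generic sketch with a made-up Model interface, not a Papervision-specific API):

class LodSwitcher {
    // Hypothetical handle to a displayable model; in Papervision this would be
    // a display object loaded from the low- and high-poly versions of the asset.
    interface Model { void setVisible(boolean visible); }

    private final Model lowPoly;
    private final Model highPoly;
    private final double switchDistance;

    LodSwitcher(Model lowPoly, Model highPoly, double switchDistance) {
        this.lowPoly = lowPoly;
        this.highPoly = highPoly;
        this.switchDistance = switchDistance;
    }

    // Call once per frame with the camera-to-model distance; shows only one
    // of the two versions depending on how close the camera is.
    void update(double cameraDistance) {
        boolean useHigh = cameraDistance < switchDistance;
        highPoly.setVisible(useHigh);
        lowPoly.setVisible(!useHigh);
    }
}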
Hope that helps a bit.
