Infinite world generator - dictionary

I want to make an infinite 2-D world generator. I thought I would make a chunk class that represents some part of the world. I just don't know where to store the chunks or how to generate new ones.
I thought I could store the chunks in a vector and remember their X,Y coordinates. For the player, I would keep a 3x3 array (with him standing in the center chunk) of pointers into that vector. When he moves up, for example, I shift the upper and middle rows down and load the new top row of chunks from the vector. I don't know if this is a good idea; it was just the first thing I thought of.
Anyway, I have no idea how to generate chunks so that they match each other (no desert right next to water, and so on). Even generating a fixed-size map is quite hard for me, and I really do need an infinite world.
Some time ago I generated a fixed-size world using a flood-fill method: I filled the whole map with grass at the beginning and then made random spots of water, trees and other features. I don't think that is usable in the infinite-world case, though.

These problems have been addressed in implementations of Conway's Life.
One way to achieve an unbounded world while only dealing with the portions that differ from each other is described in discussions of the Hashlife algorithm and its data structures. It also delivers high performance within a chunk and good performance across chunks: chunks that are identical are represented by a single description that many chunks can point to.
http://www.drdobbs.com/jvm/an-algorithm-for-compressing-space-and-t/184406478
http://en.wikipedia.org/wiki/Hashlife
http://golly.sourceforge.net/
http://tomas.rokicki.com/hlife/
http://www-users.cs.york.ac.uk/~jowen/hashlife.html

This probably isn't the right way to do this, but may give you some ideas.
What you could do is keep a 3x3, or even 5x5, 2D array of chunks in memory, updating which chunks are loaded depending on the player's position. The rest of the world can be stored in a file, which is read from and written to. When a player moves within 2-3 chunks of an uninitialized (empty) chunk, generate that chunk using whatever method you want.
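The sliding window of loaded chunks can be sketched like this. It stores chunks sparsely in a dictionary keyed by chunk coordinates (which also matches the "dictionary" idea in the question's title); all the names and the chunk size here are hypothetical:

```python
CHUNK_SIZE = 16  # hypothetical chunk edge length in tiles

class Chunk:
    """One square piece of the world."""
    def __init__(self, cx, cy):
        self.cx, self.cy = cx, cy
        # Placeholder generation: fill the chunk with grass.
        self.tiles = [["grass"] * CHUNK_SIZE for _ in range(CHUNK_SIZE)]

class World:
    """Stores only the chunks that have ever been visited, keyed by chunk coords."""
    def __init__(self):
        self.chunks = {}  # (cx, cy) -> Chunk

    def get_chunk(self, cx, cy):
        # Lazily generate a chunk the first time it is requested.
        if (cx, cy) not in self.chunks:
            self.chunks[(cx, cy)] = Chunk(cx, cy)
        return self.chunks[(cx, cy)]

    def window_around(self, cx, cy):
        # The 3x3 neighbourhood around the player's chunk, grid[dy][dx].
        return [[self.get_chunk(cx + dx, cy + dy) for dx in (-1, 0, 1)]
                for dy in (-1, 0, 1)]

world = World()
grid = world.window_around(0, 0)
print(len(world.chunks))  # 9: generated on demand
```

When the player crosses into a neighbouring chunk you just call window_around again with the new center; already-visited chunks come back from the dictionary, and only the new row or column is generated.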
The tricky part is if you have rivers, bodies of water, forests, or any other type of landscape that spans multiple chunks: you'll need a separate algorithm to handle that when generating new chunks. You'll probably need to find the intersection of two lines each time you generate: the line created by the boundary between land and water (or plains and forest), and the line that represents the edge of the chunk. Once you have that point, you can figure out which side of it on the new chunk needs land or water, and randomly generate from there.
This is what I see happening in Minecraft anyways.
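As an alternative to stitching features across chunk edges explicitly, many generators derive every tile from a deterministic function of its absolute world coordinates, so adjacent chunks agree at their borders automatically. Here is a minimal sketch assuming a simple hash-based value noise; the smoothing radius, thresholds, and biome names are all made up:

```python
import hashlib

CHUNK_SIZE = 8  # hypothetical

def noise(wx, wy, seed=1234):
    """Deterministic pseudo-random value in [0, 1] for a world tile."""
    h = hashlib.md5(f"{seed}:{wx}:{wy}".encode()).hexdigest()
    return int(h[:8], 16) / 0xFFFFFFFF

def smooth_noise(wx, wy):
    # Average a tile with its 8 neighbours so biomes form blobs, not static.
    total = sum(noise(wx + dx, wy + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1))
    return total / 9.0

def tile_at(wx, wy):
    v = smooth_noise(wx, wy)
    if v < 0.4:
        return "water"
    elif v < 0.45:
        return "sand"   # a beach band keeps water away from dry biomes
    else:
        return "grass"

def generate_chunk(cx, cy):
    # Tiles depend only on absolute world coordinates, never on the chunk.
    return [[tile_at(cx * CHUNK_SIZE + x, cy * CHUNK_SIZE + y)
             for x in range(CHUNK_SIZE)] for y in range(CHUNK_SIZE)]
```

Because tile_at only looks at world coordinates, regenerating a chunk always produces the same result, and the tiles along the edge of one chunk are computed from the same neighbourhood whether or not the adjacent chunk exists yet.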


Rendering highly granular and "zoomed out" data

There was a GIF on the internet where someone used some sort of CAD program and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture at a smaller scale, and then they proceed to zoom in further on another tiny dot, revealing another detailed picture, repeating several times. here is the link to the gif
Or another similar example: imagine you have a time-series with a granularity of a millisecond per sample and you zoom out to reveal years-worth of data.
My question is: how does such finely detailed data end up getting rendered, when a huge amount of data is aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (i.e. in the time-series case, go through a million records just to average them into one line; in the CAD case, render the whole vector picture and blur it into a tiny dot), or are there level-of-detail optimizations that avoid this?
If so, how do they work, and where can one learn about them?
This is a very well known problem in game development. In the following I assume you are using a scene graph, a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the scale of the object as a factor. Modern 3D/CAD software will sometimes offer automatic "simplification" of models, which can be used as the low-res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view takes only around 1-7 point checks, depending on how you check, and you can use object parenting for transitive bounding boxes.
Clipping: if a polygon is not rendered in the viewport at all, there is no need to render it. In the GIF you posted, when the camera zooms in on a new scene, all that is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, the coordinates of vertices become tiny fractional floating-point numbers. Given that you want all coordinates as precise as possible, and that modern CPUs handle floats with at most 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean is that as your camera zooms in to, say, 1/1000 of the previous view, you can scale the larger objects up by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but it allows you to stay within well-defined 3D coordinates while being able to zoom in infinitely.
On a higher level: as you zoom in and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera moves and the world gets multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
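The distance-based LOD selection described above can be sketched as a simple table lookup; the distance thresholds and level names here are made-up tuning values, not anything canonical:

```python
# Hypothetical LOD table: (max effective distance, model resolution name).
LOD_LEVELS = [
    (10.0, "high"),                  # close up: full-resolution mesh
    (50.0, "medium"),
    (200.0, "low"),
    (float("inf"), "bounding_box"),  # far away: just the box
]

def pick_lod(camera_pos, object_pos, object_scale=1.0):
    """Choose which resolution of a model to show, factoring in object scale."""
    dist = sum((c - o) ** 2 for c, o in zip(camera_pos, object_pos)) ** 0.5
    # A big object should keep detail longer, so divide by its scale.
    effective = dist / object_scale
    for max_dist, name in LOD_LEVELS:
        if effective <= max_dist:
            return name

print(pick_lod((0, 0, 0), (0, 0, 30)))        # medium
print(pick_lod((0, 0, 0), (0, 0, 30), 10.0))  # high: large object keeps detail
```

Dividing the distance by the object's scale is one way to fold "include the scale of the object as a factor" into the same threshold table.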
First, you can use caching, with tiles, as is done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for the cache (not that much is needed, actually, much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as picking only every other point (or every 10th point, or whatever suits you). It yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders - anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly pick some points, and if, for example, 100 sampled points hit pixel one and only 50 hit pixel two, you can more or less safely assume that if you kept sampling, pixel one would still be hit about twice as often as pixel two. So you can just stop there and draw pixel one with a color twice as heavy.
Also consider how much data you can and want to put into a pixel. If you draw a pixel in grayscale, there are only 256 possible values, so you don't need to be more precise than that. And if you draw a pixel in full color, you should still ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?
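The every-Nth-point idea can be sketched for a long time series being collapsed into pixel columns; the stride and column count below are arbitrary:

```python
def downsample(samples, width, stride=10):
    """Reduce a long series to `width` pixel columns by looking only at every
    `stride`-th sample, averaging the ones that land in each column."""
    sums = [0.0] * width
    counts = [0] * width
    n = len(samples)
    for i in range(0, n, stride):
        col = i * width // n          # which pixel column this sample hits
        sums[col] += samples[i]
        counts[col] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# A million-point ramp collapses to 8 columns while touching only 10% of it.
series = list(range(1_000_000))
print(downsample(series, 8))
```

For smooth data like this ramp the stride barely matters; for spiky data (the "houses" case above) a larger stride will start to miss features, which is exactly the trade-off described.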

Pathfinding on large, mostly walkable, non-grid based maps

I am developing a 2D space RTS game which involves spaceships navigating over large maps (think Homeworld but in 2D), which mostly consist of open space, but occasionally containing asteroid fields, space structures and other unwalkable terrain elements.
The problem I have is that a grid-based solution with good enough precision results in a very large number of nodes, so pathfinding takes a long time; and since the maps contain a lot of open space, huge walkable sections of the map are represented by large numbers of walkable nodes.
I have tried switching to a quad-tree representation of the map to reduce the number of nodes in the navigation graph, but this approach yields weird paths in which ships go through the exact center of a square when in fact they just need to pass through the square to the next one.
I have managed to implement a path optimization which removes nodes from the path when there is a straight path to a later point, but this only partially resolved the problem, so my feeling now is that I am still using the wrong representation for my navigation graph.
Looking at the way Unity does things, there is a way to represent navigation data as a mesh, but I haven't found any source code, or even a more or less detailed description, of its inner workings.
So which data structure/pathfinding algorithm is optimal for my case?
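The node-removal optimization the question describes is often called string-pulling or path smoothing. A minimal sketch, assuming a walkable(x, y) predicate and a sampled line-of-sight check (both hypothetical; a real game would use an exact segment-vs-obstacle test instead of sampling):

```python
def line_of_sight(a, b, walkable, steps=32):
    """Approximate visibility check: sample points along segment a-b."""
    (ax, ay), (bx, by) = a, b
    return all(walkable(ax + (bx - ax) * t / steps,
                        ay + (by - ay) * t / steps)
               for t in range(steps + 1))

def smooth_path(path, walkable):
    """Greedy string-pulling: skip ahead to the furthest visible waypoint."""
    if not path:
        return []
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        # Find the furthest node still directly reachable from path[i].
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j], walkable):
            j -= 1
        out.append(path[j])
        i = j
    return out

# Open space: everything walkable, so any zig-zag collapses to two points.
path = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
print(smooth_path(path, lambda x, y: True))  # [(0, 0), (4, 0)]
```

This is the post-processing step; it doesn't fix the underlying quad-tree representation, which is why a navigation mesh (polygonal walkable regions with portal edges between them) is usually the better fit for large, mostly open maps.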

Calculate a dynamic iteration value when zooming into a Mandelbrot

I'm trying to figure out how to automatically adjust the maximum iteration value when moving around in the Mandelbrot fractal.
All the examples I've found use a constant of 1000 or less, but that's not enough when zooming into the fractal.
Is there a way to determine max_iterations based on, for example, where you are in Mandelbrot space (x_start, x_end, y_start, y_end)?
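One common heuristic is to tie the iteration cap to how deep the zoom is, growing it with the logarithm of the magnification. A sketch; the base and scale constants are tuning knobs, not canonical values:

```python
import math

def max_iterations(x_start, x_end, base=100, scale=50):
    """Heuristic iteration cap that grows with zoom depth.

    Zooming in by another factor of 10 adds roughly `scale` iterations.
    `base` and `scale` are arbitrary starting points to tune by eye."""
    width = abs(x_end - x_start)   # current view width in Mandelbrot space
    zoom = max(1.0, 4.0 / width)   # the full set is about 4 units wide
    return int(base + scale * math.log10(zoom))

print(max_iterations(-2.0, 2.0))      # 100 at the full view
print(max_iterations(-0.002, 0.002))  # 250 at a 1000x zoom
```

This only uses x_start and x_end as the question suggests; it won't adapt to how "deep" the boundary actually is at a given location, which is what the adaptive sampling idea below addresses.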
One method I tried was to repeatedly pre-process a small area near the boundary of the Mset with increasing iteration counts, until the percentage of points changing status from one repetition to the next was small. The problem was that this varies in different places on the current map, since the "depth" varies across it. How do you find the right place to sample? By logging the "deepest" boundary area during the previous generation (one that will still be within the next zoom area).
But my best strategy was to avoid iterating wherever possible:
Away from the boundary of the Mset, areas of equal depth can be "contoured" and then filled with that depth. It was not an easy algorithm. Basically I followed a raster scan, but when I detected a boundary of iteration change (examining all the neighbours to ensure I wasn't close to the edge of the Mset), I would switch to a curve-stitching method to iterate around the contour back to where it started (obviously not recalculating spots I had already done), and then make a second pass filling in the raster lines within the contour with that iteration level. It was fraught with leaks, but eventually I cracked it.
Within the Mset, I followed the same approach, because the very last thing you want to do is plough across vast areas hitting the iteration limit at every point.
The difficult area is close to the boundary, where the iteration results can't be related to smooth contours among the neighbours. The contour-stitching method won't work there, since there may only ever be one pixel of a particular depth.
The contour method will also produce faults on the lower or Mset sides of this region, but since this area looks chaotic until you zoom deeper, I lived with that.
So having said all that, I simply set the iteration depth as high as I can tolerate, but perhaps you can combine my first paragraph with the area-filling techniques.
BTW, colouring the region adjacent to the Mset looks terrible when you attempt an animated smooth playback of the zoom. For that reason I coloured this area in grey scale, by comparing with neighbours: if there was too much difference, I coloured it 0x808080 at first, then adapted that depending on the predominant depth of the neighbours. All requiring fine tuning!

Mapping a 2D space of 3D volumes to 1D space (a file)

I have a two dimensional array of three dimensional volumes. This two dimensional array represents a top-down view of all of the three dimensional volumes.
I want to save this data to a file in such a way that I can retrieve it very quickly. My problem is that the two-dimensional array may change size and shape; it's not always nice and square, which tends to leave unused sections, and quite a lot of them.
My current retrieval method uses the volume-level 2D view to locate volumes that need to be loaded, but I am having difficulty coming up with a good data structure and storage technique. Most of the methods I have come across require the 2D view to have the same length and width, or depend on the length or the width.
It may be worth noting that I want to avoid unused space in the file and that I also want good locality for the mapped points. When mapping points, it is pretty common to come up with a solution that works but produces odd relationships; {0, 0} must not map to anything but {0}, and {1, 0} should map to something pretty close to {0}, not something like {34}.
How would you go about doing this in a space and time efficient manner?
I solved this a while back by implementing a few different space-filling curves and using them to map the higher-dimensional data down to the one-dimensional file. I found that a Hilbert curve worked perfectly.
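For reference, the coordinate-to-offset half of that mapping can be sketched with the well-known iterative bit-manipulation algorithm for the Hilbert curve (this follows the usual published formulation; orientation conventions vary between implementations):

```python
def hilbert_xy_to_d(n, x, y):
    """Map cell (x, y) of an n-by-n grid (n a power of two) to its distance
    along the Hilbert curve, i.e. its offset in the 1D file."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the recursion lines up.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Neighbouring cells land at neighbouring offsets:
print(hilbert_xy_to_d(8, 0, 0), hilbert_xy_to_d(8, 0, 1))  # 0 1
```

The locality property is exactly what the question asks for: cells that are adjacent in 2D map to file offsets that are close together, far more consistently than row-major order does.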
So you are talking about just saving 2D slices of your model space, right?
To be honest, I think the simplest, and probably the best, thing to do is just save everything. It keeps things very simple, and you can easily seek to a specific spot in the file too.
Then compress your file stream using zlib, bz2, etc. If you have a lot of zeros, it will compress very well. When I started doing this, it sped up my HPC code quite a bit.
I can think of several more complicated things to do, but what are you really trying to achieve? The compression will make it small, and it is nice to have a simple format.
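To illustrate how well the "save everything, then compress" approach handles the unused sections, here is a small sketch with Python's zlib on a mostly-empty volume (the sizes and cell values are arbitrary):

```python
import zlib

# A mostly-empty volume flattened to bytes: 64x64x64 cells, a few non-zero.
raw = bytearray(64 * 64 * 64)
raw[1000] = 7
raw[200000] = 3

compressed = zlib.compress(bytes(raw), level=6)
print(len(raw), "->", len(compressed))  # the sparse volume shrinks dramatically

# Reading it back is just decompress + index into the flat layout.
restored = zlib.decompress(compressed)
```

The one thing you give up is cheap random seeking, since offsets into the compressed stream no longer correspond to cell positions; compressing the volume in fixed-size blocks is a common compromise.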

Rendering massive amount of data

I have a 3D floating-point matrix; in the worst case its size could be 200000x1000000x100. I want to visualize this matrix using Qt/OpenGL.
Since the number of elements is extremely high, I want to render them in such a way that when the camera is far away from the matrix, I just show a number of interesting points that give an approximation of what the matrix looks like. When the camera gets closer, I want more detail, so more elements get computed.
I would like to know if there are techniques that deal with this kind of visualization.
The general idea is called level-of-detail rendering and is a whole science in itself.
For your domain I would recommend two steps:
1) Reduce the number of cells by averaging them (arithmetic mean) into cubes of different sizes and caching those cubes (on disk as well as in RAM). "Different" means here that you keep the same data at multiple cube sizes, e.g. coarse-grained cubes of 10000x10000x10000 cells and finer cubes of 100x100x100 cells, giving you multiple levels of detail. You have to organize these in a hierarchical structure (the larger ones containing multiple smaller ones), and for this I would recommend an octree:
http://en.wikipedia.org/wiki/Octree
2) The second step is to actually render parts of this Octree:
To do this, use the distance from your camera point to the sub-cubes. Go through the cubes and decide either to descend into a sub-cube or to render the larger cube, based on this distance function and heuristically chosen or guessed threshold values.
(2) can optionally be optimized further: organize the to-be-rendered cubes into layers. The direction of the layers (whether they are x, y, or z slices) depends on your camera viewpoint, to which they should be near-perpendicular. Then render each slice into a texture, and voila: you only have to render a single quad with that texture per slice, and 1000 quads are no problem to render.
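Steps (1) and (2) can be sketched as a recursive traversal that decides, per cube, whether to descend or to draw, based on the cube's apparent size from the camera. The threshold and depth limit here are exactly the "heuristically chosen or guessed" values mentioned above:

```python
def collect_render_list(center, size, camera, depth=0, max_depth=3, threshold=2.0):
    """Walk a conceptual octree top-down and return the (center, size) cubes
    to draw.  A cube is subdivided only while it would look big on screen,
    measured as the ratio of its size to its distance from the camera."""
    dist = max(1e-6, sum((c - p) ** 2 for c, p in zip(center, camera)) ** 0.5)
    # If the cube looks small from here, or we bottom out, draw it as-is.
    if depth == max_depth or size / dist < threshold:
        return [(center, size)]
    half = size / 2
    out = []
    for dx in (-1, 1):
        for dy in (-1, 1):
            for dz in (-1, 1):
                # Child cube centers sit size/4 away from the parent center.
                child = (center[0] + dx * half / 2,
                         center[1] + dy * half / 2,
                         center[2] + dz * half / 2)
                out.extend(collect_render_list(child, half, camera,
                                               depth + 1, max_depth, threshold))
    return out

# Cubes near the camera get subdivided further than distant ones.
cubes = collect_render_list((0, 0, 0), 100.0, camera=(25, 25, 20))
print(len(cubes))
```

In a real renderer each returned (center, size) pair would be looked up in the cache from step (1) at the matching resolution and drawn; far-away regions cost a single coarse cube while the region under the camera is resolved finely.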
Qt has ways of rendering huge numbers of elements efficiently. Check the examples/demos that ship with Qt.
