Mapping a 2D space of 3D volumes to 1D space (a file) - math

I have a two dimensional array of three dimensional volumes. This two dimensional array represents a top-down view of all of the three dimensional volumes.
I want to save this data to a file in such a way that I can retrieve it very quickly. My problem is that the two dimensional array may change size and shape; it's not always nice and square. This tends to leave unused sections and quite a lot of them.
My retrieval method currently uses the volume-level 2D view to locate volumes that need to be loaded, but I am having difficulty coming up with a good data structure and storage technique. Most of the methods I have come across require the 2D view to be square, or depend on its length or width.
It may be worth noting that I want to avoid unused space in the file, and that I also want good locality for the mapped points. When mapping points, it is pretty common to come up with a solution that works but produces odd relationships: {0, 0} must not map to anything but {0}, and {1, 0} should map to something close to {0}, not to something like {34}.
How would you go about doing this in a space and time efficient manner?

I solved this a while back by implementing a few different space-filling curves and using them to map the higher-dimensional data down to the one-dimensional file. I found that a Hilbert curve worked perfectly.
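For anyone finding this later: the 2D-to-1D Hilbert mapping itself is short. Below is a sketch of the well-known xy2d routine from the Wikipedia article on Hilbert curves. It assumes the side length n is a power of two, so a ragged 2D array needs padding plus some index of which cells are actually in use:

#include <cstdint>
#include <utility>

// Map (x, y) on an n x n grid (n a power of two) to its distance d along
// the Hilbert curve. (0, 0) maps to 0, and neighbors in the plane land at
// nearby d values, which is exactly the locality asked for above.
uint64_t xy2d(uint64_t n, uint64_t x, uint64_t y) {
    uint64_t d = 0;
    for (uint64_t s = n / 2; s > 0; s /= 2) {
        uint64_t rx = (x & s) ? 1 : 0;
        uint64_t ry = (y & s) ? 1 : 0;
        d += s * s * ((3 * rx) ^ ry);
        // Rotate/flip the quadrant so the curve stays continuous.
        if (ry == 0) {
            if (rx == 1) {
                x = n - 1 - x;
                y = n - 1 - y;
            }
            std::swap(x, y);
        }
    }
    return d;
}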

So you are talking about just saving 2D slices of your model space, right?
To be honest, I think the simplest and probably the best thing to do is just save everything. It makes things very simple, and you can seek very easily to a specific spot in the file too.
Then, compress your file stream using zlib, bz2, etc. If you have a lot of zeros, it will compress very well. When I started doing this, it sped up my HPC code quite a bit.
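For illustration, the one-shot zlib call is tiny (a sketch with minimal error handling). One caveat to the seeking point above: if you compress the whole stream you lose cheap seeking, so in practice you would compress each chunk separately and keep a small table of compressed offsets.

#include <vector>
#include <zlib.h>

// One-shot zlib compression of a raw buffer; returns empty on failure.
std::vector<unsigned char> compressChunk(const std::vector<unsigned char>& raw) {
    uLongf destLen = compressBound(static_cast<uLong>(raw.size()));
    std::vector<unsigned char> out(destLen);
    int rc = compress2(out.data(), &destLen,
                       raw.data(), static_cast<uLong>(raw.size()),
                       Z_BEST_SPEED); // favor speed over ratio for big volumes
    if (rc != Z_OK) return {};
    out.resize(destLen);
    return out;
}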
I can think of several more complicated things to do, but what are you really trying to achieve? The compression will make it small, and it is nice to have a simple format.

Related

Pathfinding on large, mostly walkable, non-grid based maps

I am developing a 2D space RTS game which involves spaceships navigating over large maps (think Homeworld, but in 2D) which mostly consist of open space but occasionally contain asteroid fields, space structures and other unwalkable terrain elements.
The problem I have is that using a grid-based solution with good enough precision results in a very large number of nodes, so pathfinding takes a long time, and considering that maps contain a lot of open space, huge walkable sections of the map are represented by a large number of walkable nodes.
I have tried switching to a quad-tree representation of the map to reduce the number of nodes in the navigation graph, but this approach yields weird paths that involve ships going through the exact center of a square when in fact they just need to pass through the square on the way to the next one.
I have managed to implement a path optimization which removes nodes from the path when there is a straight path to the next point, but this only partially resolved the problem, so my feeling now is that I am still using the wrong representation for my navigation graph.
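(For reference, that optimization is essentially the following sketch, where lineOfSight is a stub for whatever walkability test the map provides.)

#include <cstddef>
#include <vector>

struct Point { float x, y; };

// Stub: replace with a real obstacle test (grid raycast, quad-tree query, ...).
bool lineOfSight(const Point&, const Point&) { return true; }

// Drop every node that the previously kept node can already see directly.
std::vector<Point> smoothPath(const std::vector<Point>& path) {
    if (path.size() < 3) return path;
    std::vector<Point> out;
    std::size_t anchor = 0;
    out.push_back(path[0]);
    for (std::size_t i = 2; i < path.size(); ++i) {
        // If node i is no longer visible from the anchor, node i-1 was a
        // necessary corner: keep it and continue from there.
        if (!lineOfSight(path[anchor], path[i])) {
            anchor = i - 1;
            out.push_back(path[anchor]);
        }
    }
    out.push_back(path.back());
    return out;
}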
Looking at the way Unity does things, there is a way to represent navigation data using a mesh, but I haven't found any source code or even a more or less detailed description of its inner workings.
So which data structure/pathfinding algorithm is optimal for my case?

Infinite world generator

I want to make an infinite 2D world generator. I thought I would make a chunk class that represents some part of the world. I just don't know where to store the chunks, or how to generate new ones.
I thought I could store chunks in a vector and remember their X,Y. For the player, I'd make a 3x3 array (with him standing in the center chunk) holding pointers to chunks from the vector. When he moves up, for example, I shift the upper and middle rows down and load new chunks from the vector. I don't know if this is a good idea; it was the first thing I thought of.
Anyway, I have no idea how to generate chunks so that they match each other (no desert right next to water, and so on). Even generating a fixed-size map is quite hard for me, and I really do need an infinite world.
Some time ago I generated a fixed-size world using a flood-fill method: I filled the whole map with grass at the beginning, and then made random spots of water, trees and other features. But I don't think that is usable in the infinite-world case.
These problems have been addressed in implementations of Conway's Life.
One way to achieve an unbounded world and only deal with the portions that are different from each other is described in discussions of the hashlife algorithm and data structures. It also deals with high performance within a chunk and good performance across chunks. Chunks that are the same are represented by one description that many chunks can point to.
http://www.drdobbs.com/jvm/an-algorithm-for-compressing-space-and-t/184406478
http://en.wikipedia.org/wiki/Hashlife
http://golly.sourceforge.net/
http://tomas.rokicki.com/hlife/
http://www-users.cs.york.ac.uk/~jowen/hashlife.html
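The core sharing trick is small, though: identical chunks are interned once and everything else points at the canonical copy. A minimal sketch of that idea (the names here are illustrative, not taken from any of the implementations above):

#include <array>
#include <cstdint>
#include <memory>
#include <string>
#include <unordered_map>

using Chunk = std::array<std::uint8_t, 16 * 16>; // one 16x16 tile of cell data

struct ChunkPool {
    std::unordered_map<std::string, std::shared_ptr<const Chunk>> canonical;

    // Return the one canonical instance of `c`, creating it on first sight.
    // Two identical all-grass chunks then cost the memory of one.
    std::shared_ptr<const Chunk> intern(const Chunk& c) {
        std::string key(c.begin(), c.end());
        auto it = canonical.find(key);
        if (it != canonical.end()) return it->second;
        auto stored = std::make_shared<const Chunk>(c);
        canonical.emplace(std::move(key), stored);
        return stored;
    }
};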
This probably isn't the right way to do this, but may give you some ideas.
What you could do is store a 3x3, or even 5x5, 2D array of chunks in memory, updating these chunks, and which chunks are loaded, depending on the player's position. The rest of the world can be stored in a file, which is read from and written to. When the player moves within 2-3 chunks of an uninitialized or empty chunk, generate that chunk using whatever method you want.
The tricky part is rivers, bodies of water, forests, or any other type of landscape feature that spans multiple chunks: you'll need a separate algorithm to handle those when generating new chunks. You'll probably have to find an intersection between two lines each time you generate: the line formed by the boundary between land and water (or plains and forest), and the line that represents the edge of the chunk. Once you have that point, you can figure out which side of it on the new chunk needs land and which needs water, and randomly generate from there.
This is what I see happening in Minecraft anyways.
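A rough sketch of that bookkeeping (CHUNK_SIZE, RADIUS and loadOrGenerate are placeholders for your own sizes and your file/generator code):

#include <cmath>
#include <cstdlib>
#include <map>
#include <utility>

struct Chunk { /* tile data for one CHUNK_SIZE x CHUNK_SIZE region */ };

constexpr int CHUNK_SIZE = 64; // world units per chunk (an assumption)
constexpr int RADIUS = 1;      // 1 -> 3x3 window, 2 -> 5x5 window

using ChunkMap = std::map<std::pair<int, int>, Chunk>;

// Keep exactly the window of chunks around the player loaded.
// loadOrGenerate stands in for "read from file, else run the generator".
void updateWindow(ChunkMap& loaded, float playerX, float playerY,
                  Chunk (*loadOrGenerate)(int cx, int cy)) {
    int cx = static_cast<int>(std::floor(playerX / CHUNK_SIZE));
    int cy = static_cast<int>(std::floor(playerY / CHUNK_SIZE));
    // Evict chunks that fell outside the window (write-back to disk goes here).
    for (auto it = loaded.begin(); it != loaded.end();) {
        if (std::abs(it->first.first - cx) > RADIUS ||
            std::abs(it->first.second - cy) > RADIUS)
            it = loaded.erase(it);
        else
            ++it;
    }
    // Load or generate anything newly inside the window.
    for (int x = cx - RADIUS; x <= cx + RADIUS; ++x)
        for (int y = cy - RADIUS; y <= cy + RADIUS; ++y)
            if (!loaded.count({x, y}))
                loaded.emplace(std::make_pair(x, y), loadOrGenerate(x, y));
}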

Rendering a massive amount of data

I have a 3D floating-point matrix; in the worst-case scenario its size could be 200000x1000000x100. I want to visualize this matrix using Qt/OpenGL.
Since the number of elements is extremely high, I want to render them in such a way that when the camera is far away from the matrix, I just show a number of interesting points that give an approximation of what the matrix looks like. When the camera gets closer, I want to show more detail, and hence more elements are calculated.
I would like to know if there are techniques that deal with this kind of visualization.
The general idea is called level-of-detail rendering and is a whole science in itself.
For your domain I would recommend two steps:
1) Reduce the number of cells by averaging them (arithmetic mean) over cubes of different sizes, and cache those cubes (on disk as well as in RAM). "Different" means here that you have the same data at multiple cube sizes, e.g. coarse-grained cubes of 10000x10000x10000 cells and finer cubes of 100x100x100 cells, resulting in multiple levels of detail. You have to organize these in a hierarchical structure (the larger ones containing multiple smaller ones), and for this I would recommend an octree:
http://en.wikipedia.org/wiki/Octree
2) The second step is to actually render parts of this octree:
To do this, use the distance from your camera position to the sub-cubes. Go through the cubes and decide either to descend into a sub-cube or to render the larger cube, based on this distance function and heuristically chosen or guessed threshold values.
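A minimal sketch of both steps under those assumptions (uniform cubes, a node having either zero or eight children; drawCube is a stub standing in for the actual GL calls):

#include <cmath>
#include <cstdio>
#include <memory>

// Stub: in the real renderer this would issue the OpenGL draw calls.
void drawCube(float cx, float cy, float cz, float halfSize, float mean) {
    std::printf("cube (%g, %g, %g) half %g mean %g\n", cx, cy, cz, halfSize, mean);
}

struct OctreeNode {
    float cx, cy, cz, halfSize;              // cube center and half edge length
    float meanValue = 0;                     // arithmetic mean of the cells inside
    std::unique_ptr<OctreeNode> children[8]; // all null for a leaf

    // Step 1: fill each level's mean from its children (leaf means are
    // assumed to be set from the raw cell data beforehand).
    float computeMeans() {
        if (!children[0]) return meanValue;
        float sum = 0;
        for (auto& c : children) sum += c->computeMeans();
        meanValue = sum / 8.0f;
        return meanValue;
    }

    // Step 2: descend while the cube still looks "large" from the camera;
    // detailK is the heuristically chosen threshold mentioned above.
    void render(float camX, float camY, float camZ, float detailK) const {
        float dx = cx - camX, dy = cy - camY, dz = cz - camZ;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f;
        if (children[0] && halfSize / dist > detailK) {
            for (const auto& c : children)
                c->render(camX, camY, camZ, detailK);
        } else {
            drawCube(cx, cy, cz, halfSize, meanValue);
        }
    }
};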
(2) can be further optimized, but this is optional: organize the to-be-rendered cubes into layers. The direction of the layers (whether they are x, y, or z slices) depends on your camera viewpoint, to which they should be near-perpendicular. Then render each slice into a texture, and voila: you only have to render a single textured quad per slice, and 1000 quads are no problem to render.
Qt has ways of rendering huge numbers of elements efficiently. Check the examples/demos that ship with Qt.

Do objects drawn by Flash Graphics class exist as objects?

Internally, Flash obviously keeps a list of the primitives drawn using Graphics, so I wondered: if you have many such primitives in a Sprite, can you re-position/remove/alter individual items rather than clear and re-draw everything? Or is this deeper into the bowels of Flash than you're allowed (or recommended) to go?
Drawing primitives aren't accessible to user code once they've been committed to the graphics context, but if you need fast drawing objects you should use Shapes instead of Sprites. Sprites are containers that can hold other sprites and graphics contexts; Shapes are objects with only a graphics context, and they are non-interactive.
Sprite -> DisplayObjectContainer - > InteractiveObject -> DisplayObject
Shape -> DisplayObject
Unfortunately, it is impossible: Once the items are drawn, you can only modify the full shape, but not the drawing itself.
To give you more of an explanation, I googled how Flash actually calculates display objects. Unfortunately, I couldn't find anything specific.
But I found enough to make an educated guess: [EDIT]: I found a very interesting PDF on the Anatomy of a Flash. It explains the rendering tree and how graphics objects are treated internally.
I know for a fact that all shape tweens created in the IDE are compiled into shape sequences (each frame is stored as a separate image). And it makes sense to do it that way: for each new frame of the movie, all vector images are added to a tree, each is rendered as a bitmap, and the results are combined and drawn as one final bit plane in order to be displayed. So it is only logical to do every possible shape calculation at compile time, rather than at runtime.
Then again, a bitmap stores 32 bits of color information for every single pixel, while vectors are stored as simple values: x and y coordinates, line style, fill style, etc. Some vectors can be grouped, so that for more complex shapes, line and fill styles only have to be stored once, and only coordinates are necessary for the rest. Also, primitive shapes like circles and rectangles require less information than objects combined from many individual points and lines.
[EDIT]: The above mentioned PDF says this:
Both AS2 and AS3 DisplayObjects are converted to SObjects internally. SObjects have a character associated. Based on the character type it has different drawing methods, but it all resumes to drawing fills with different sources of colors.
It would take a very, very complex vector shape to require more individual pieces of information than its bitmap representation, provided it is larger than a few pixels in width and height. Therefore, keeping simple shapes as vector representations consumes considerably less memory than storing full bitmaps - and so it is logical not to do shape rendering at compile time either (except for complicated shapes - then the "cacheAsBitmap" property comes into play).
Consider what I've said about vectors, line style and fill style, etc. - sounds quite a lot like the sequence of commands we have to write when drawing in ActionScript, right? I would assume these commands are simply converted 1:1 into exactly the kind of vector representations I was talking about. This would make the compiler faster, the binaries smaller, and the handling of both the IDE shapes and the AS shapes exactly the same.
[EDIT]: Turns out I was not quite right on that:
Edge & Colors - the LSObjects tree is traversed and a list of edges is created. Edges have colors associated. Strokes are converted to edges. Colors are sources of display data, e.g. Bitmaps, Video, Solid fills, Gradients.
Rasterization - edges are sorted and a color is calculated for each pixel; pixels are touched only once.
Presentation - after the main rasterizer is done painting, the memory buffer is copied to the screen.
Now imagine all of those vectors were freely editable:
The sequence of commands would no longer be final! What if you were to add, move or erase one at runtime? For example: a rectangle inside of a filled rectangle subtracts the inner shape from the outer shape. What if you moved one of the inner corner points to the outside? The result would be a completely different shape! Or if you added one point? You could no longer store the shape as a rectangle, and would need five point items to draw the same thing that had once been one rect item. In short: all the grouping and memory optimizations would no longer work, and runtime graphics would slow down considerably. That's why you are only allowed to add new elements to a shape, not to modify them once they are drawn - and why you have to clear and redraw your graphics if you want existing shapes to change.
[EDIT]: You can always do complex stuff by doing the calculations yourself. I still believe it was a good decision not to integrate those into basic graphics functionality.
With Flash CS5, and the XFL file format, this data is now accessible as XML.
For example, you could make a tile map composed of 'Graphic' items from a MovieClip, with various frames being various tiles. Instantly you come to the problem of needing to access those inaccessible frame indexes from 'Shape' objects.
If you put them into a symbol (even one that is not exported), you can find it in a file in your LIBRARY folder (after saving as 'xfl'). It mirrors the Library contents.
<DOMSymbolItem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://ns.adobe.com/xfl/2008/" name="Tileset_Level_Test" itemID="4e00fe7f-00000450" linkageExportForAS="true" linkageClassName="Tileset_Level_Test" sourceLibraryItemHRef="Symbol 1" lastModified="1308719656" lastUniqueIdentifier="3">
  <timeline>
    <DOMTimeline name="Tileset_Level_Test">
      <layers>
        <DOMLayer name="Layer 1" color="#4FFF4F" current="true" isSelected="true" autoNamed="false">
          <frames>
            <DOMFrame index="0" keyMode="9728">
              <elements>
                <DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="8" loop="play once">
                  <transformationPoint>
                    <Point/>
                  </transformationPoint>
                </DOMSymbolInstance>
                <DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="4" loop="play once">
                  <matrix>
                    <Matrix tx="48"/>
                  </matrix>
                  <transformationPoint>
                    <Point/>
                  </transformationPoint>
                </DOMSymbolInstance>
                ... lots more...
              </elements>
            </DOMFrame>
          </frames>
        </DOMLayer>
      </layers>
    </DOMTimeline>
  </timeline>
</DOMSymbolItem>
The XML looks quite complex, but you can process it down to something much simpler with the XML class, and (for instance) construct a collision mask from a MovieClip mirroring those frame indexes, and identify spawn points and other special classes of things. Or you might process the data and draw the whole map yourself, having only needed a way to build it visually. All you might really care about are the tx and ty attributes in the Matrix (for where a tile is placed) and the 'firstFrame' attribute on the 'DOMSymbolInstance' (for which tile).
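As a sketch of that processing step (here in C++ with the tinyxml2 library, purely as an assumption; the AS3 XML class works just as well), pulling out firstFrame and the Matrix tx/ty for every placed tile:

#include <cstdio>
#include <cstring>
#include "tinyxml2.h" // assumes tinyxml2 is available on the include path

using namespace tinyxml2;

// Recursively visit every DOMSymbolInstance in the XFL document and print
// which tile it uses (firstFrame) and where it sits (Matrix tx/ty; a
// missing <matrix> element means the tile sits at the origin).
void visit(XMLElement* e) {
    for (; e; e = e->NextSiblingElement()) {
        if (std::strcmp(e->Name(), "DOMSymbolInstance") == 0) {
            int frame = e->IntAttribute("firstFrame"); // 0 if absent
            float tx = 0, ty = 0;
            if (XMLElement* m = e->FirstChildElement("matrix"))
                if (XMLElement* mat = m->FirstChildElement("Matrix")) {
                    tx = mat->FloatAttribute("tx");
                    ty = mat->FloatAttribute("ty");
                }
            std::printf("tile frame %d at (%g, %g)\n", frame, tx, ty);
        }
        visit(e->FirstChildElement());
    }
}

int main() {
    XMLDocument doc;
    if (doc.LoadFile("LIBRARY/Tileset_Level_Test.xml") != XML_SUCCESS) return 1;
    visit(doc.RootElement());
    return 0;
}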
Anyways, you could preprocess it with an AIR applet to make just the data you want, and then either poop out a .as file to include in the project, or simplified XML, or whatever you like. Or use whatever other tools/languages you prefer, and add that processing step to your build scripting.
The xfl file format is also handy for tracking down and fixing all manner of things which Flash is too broken/buggy/AFU to fix, such as leftover font references in obscure parts of parts of parts.... You can either fix them in the library, or literally delete the file of the offending part, or edit the XML by hand. Grep and sed and find and xargs are all your friends for these tasks. Especially for things like snapping all coordinates to integer values, or proper cell boundaries, since all of Flash 'snapping' is horribly broken, too. Piping XML files through sed can be quite hazardous to files that you have not backed up, but quite rewarding for evil people who know what they're up to, and use version control.
Well, every DisplayObject has only one graphics reference. So if you want to move (or scale, etc.) several graphic objects in one Sprite, I suggest you use the display tree as it was intended.
Just add several children (Sprites or MovieClips or ...) in one Sprite each being redrawn when necessary.

Very general question about the implementation of vectors, vertices, edges, rays, lines & line segments

This is just a LARGE generalized question regarding rays (and/or line segments, edges, etc.) and their place in a software-rendered 3D engine that may or may not be performing raytracing operations. I'm learning the basics, and I'm the first to admit that I don't know much about this stuff, so please be kind. :)
I wondered why a parameterized line is not used instead of a ray (or is it?). I have looked at a few cpp files around the internet and seen a couple of resources define a Ray.cpp object, one with a vertex and a vector, another with a point and a vector. I'm pretty sure that you can define an infinite line with only a normal or a vector, and then define intersecting points along that line to create a line segment as a subset of that infinite line. Are there any current engines implementing lines in this way, or is there a better way to go about this?
To add further complication (or simplicity?), Wikipedia says that in vector space the end points of a line segment are often vectors, notably u -> u + v, which makes a lot of sense if you define a line by vectors in space rather than by intersecting an already defined, infinite line. But I cannot find any implementation of this either, which makes me wonder about the validity of my thoughts when applying them in a 3D engine. Further confusion comes from the Flash 3D engine Papervision: I looked at its Ray class, and it takes 6 individual number values as its parameters and then returns them as 2 different Number3D (the Papervision equivalent of a Vector) data types?!?
I'd be very interested to see an implementation of something which actually uses the CORRECT way of implementing these low level parts as per their true definitions.
I'm pretty sure that you can define an infinite line with only a normal or a vector
No, you can't. A vector would define the direction of the line, but all parallel lines share the same direction, so to pick one you need to pin it down with a specific point that the line passes through.
Lines are typically defined in Origin + Direction*K form, where K takes any real value, because that form is easy to work with in other math. You could just as well use two points on the line.
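As an illustration (a generic sketch, not lifted from any particular engine), that form maps directly onto a tiny struct, and restricting K is all that separates a line, a ray and a segment:

struct Vec3 { float x, y, z; };

// Origin + Direction*K. K ranging over all reals gives the infinite line,
// K >= 0 a ray, and 0 <= K <= 1 a segment when direction = end - start.
struct Ray {
    Vec3 origin;    // the point that pins the line down
    Vec3 direction; // need not be unit length

    Vec3 at(float k) const {
        return { origin.x + k * direction.x,
                 origin.y + k * direction.y,
                 origin.z + k * direction.z };
    }
};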
