Internally, Flash obviously keeps a list of the primitives drawn using Graphics. So I wondered: if you have many such primitives in a Sprite, can you re-position/remove/alter individual items rather than clear and re-draw everything? Or is this deeper into the bowels of Flash than you're allowed (or recommended) to go?
Drawing primitives aren't accessible to user code once they've been committed to the graphics context, but if you need fast drawing objects you should use Shapes instead of Sprites. Sprites are containers that can hold other sprites as well as a graphics context; Shapes are objects with only a graphics context, and are non-interactive.
Sprite -> DisplayObjectContainer -> InteractiveObject -> DisplayObject
Shape -> DisplayObject
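To make the difference concrete, here is a minimal sketch (assuming it runs inside a document class, so addChild is available):

import flash.display.Shape;
import flash.display.Sprite;

var circle:Shape = new Shape();            // lightest-weight canvas: no children, no mouse events
circle.graphics.beginFill(0xFF0000);
circle.graphics.drawCircle(20, 20, 20);
circle.graphics.endFill();

var container:Sprite = new Sprite();       // a container: heavier, interactive
container.addChild(circle);                // Shapes can live inside Sprites
addChild(container);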
Unfortunately, it is impossible: once the items are drawn, you can only modify the shape as a whole, not the drawing itself.
To give you more of an explanation, I googled how Flash actually calculates display objects. Unfortunately, I couldn't find anything specific.
But I found enough to make an educated guess. [EDIT]: I found a very interesting PDF on the Anatomy of a Flash. It explains the rendering tree and how graphics objects are treated internally.
I know for a fact that all shape tweens created in the IDE are compiled into shape sequences (each frame is stored as a separate image). And it makes sense to do it that way: each new frame of the movie must be calculated; all vector images are added to a tree, each rendered as a bitmap, then combined and drawn as one final bit plane in order to be displayed. So it is only logical to do every possible shape calculation at compile time rather than at runtime.
Then again, a bitmap stores 32 bits of color information for every single pixel, while vectors are stored as a handful of simple values: x and y coordinates, line style, fill style, etc. Some vectors can be grouped, so that for more complex shapes, line and fill styles only have to be stored once, and only coordinates are necessary for the rest. Also, primitive shapes like circles and rectangles require less information than objects combined from many individual points and lines.
[EDIT]: The above-mentioned PDF says this:

Both AS2 and AS3 DisplayObjects are converted to SObjects internally. SObjects have a character associated. Based on the character type it has different drawing methods, but it all resumes to drawing fills with different sources of colors.
It would take a very, very complex vector shape to require more individual pieces of information than its bitmap representation, provided it is larger than a few pixels in width and height. Therefore, keeping simple shapes as vector representations consumes considerably less memory than storing full bitmaps - and so it is logical not to do shape rendering at compile time either (except for complicated shapes - that's where the "cacheAsBitmap" property comes into play).
Consider what I've said about vectors, line style, fill style, etc. - it sounds quite a lot like the sequence of commands we have to write when drawing in ActionScript, right? I would assume these commands are simply converted 1:1 into exactly the kind of vector representations I was talking about. This would make the compiler faster, the binaries smaller, and the handling of both IDE shapes and AS shapes exactly the same.
[EDIT]: Turns out I was not quite right on that:
Edge & Colors: the LSObjects tree is traversed and a list of edges is created. Edges have colors associated. Strokes are converted to edges. Colors are sources of display data, e.g. Bitmaps, Video, Solid fills, Gradients.

Rasterization: edges are sorted and a color is calculated for each pixel – pixels are touched only once.

Presentation: after the main rasterizer is done painting, the memory buffer is copied to the screen.
Now imagine all of those vectors were freely editable:
The sequence of commands would no longer be final! What if you were to add, move or erase one at runtime? For example: having a rectangle inside of a filled rectangle subtracts the inner shape from the outer shape. What if you moved one of the corner points to the outside? The result would be a completely different shape! Or if you added one point? You could no longer store the shape as a rectangle, requiring 5 point items to draw the same thing that had once been one rect item. In short: all the groupings and memory optimizations would no longer work. It would also slow down runtime graphics considerably. That's why it is only allowed to add new elements to the shape, not to modify them once they are drawn - and why you have to clear and redraw your graphics if you want existing shapes to change.
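A minimal sketch of the resulting clear-and-redraw idiom (the rectangle and its coordinates are just placeholders):

import flash.display.Shape;

var box:Shape = new Shape();
var boxX:Number = 0;

function redraw():void {
    box.graphics.clear();                  // throw away the old command list
    box.graphics.beginFill(0x3366FF);
    box.graphics.drawRect(boxX, 0, 50, 50);
    box.graphics.endFill();
}

boxX = 100;                                // "moving" the rectangle means re-recording the commands
redraw();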
[EDIT]: You can always do complex stuff by doing the calculations yourself. I still believe it was a good decision not to integrate them into the basic graphics functionality.
With Flash CS5, and the XFL file format, this data is now accessible as XML.
For my example, you could make a tile map composed of 'Graphic' items from a MovieClip, with its various frames being the various tiles. You immediately run into the problem of needing to access those inaccessible frame indexes from 'Shape' objects.
If you put them into a symbol (even one that is not exported), you can find it in a file in your LIBRARY folder (after saving as 'xfl'). It mirrors the Library contents.
<DOMSymbolItem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://ns.adobe.com/xfl/2008/" name="Tileset_Level_Test" itemID="4e00fe7f-00000450" linkageExportForAS="true" linkageClassName="Tileset_Level_Test" sourceLibraryItemHRef="Symbol 1" lastModified="1308719656" lastUniqueIdentifier="3">
<timeline>
<DOMTimeline name="Tileset_Level_Test">
<layers>
<DOMLayer name="Layer 1" color="#4FFF4F" current="true" isSelected="true" autoNamed="false">
<frames>
<DOMFrame index="0" keyMode="9728">
<elements>
<DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="8" loop="play once">
<transformationPoint>
<Point/>
</transformationPoint>
</DOMSymbolInstance>
<DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="4" loop="play once">
<matrix>
<Matrix tx="48"/>
</matrix>
<transformationPoint>
<Point/>
</transformationPoint>
</DOMSymbolInstance>
... lots more...
</elements>
</DOMFrame>
</frames>
</DOMLayer>
</layers>
</DOMTimeline>
</timeline>
</DOMSymbolItem>
The XML looks quite complex, but you can process it down to something much simpler with the XML class and, for instance, construct a collision mask from a MovieClip mirroring those frame indexes, and identify spawn points and other special classes of things. Or you might process the data and draw the whole map yourself, having only needed a way to build it visually. All you might really care about are the tx,ty attributes in the Matrix (for where a tile is placed) and the 'firstFrame' attribute in the 'DOMSymbolInstance' (for which tile).
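For instance, a sketch of pulling out just those two pieces of information with E4X (assuming the XFL document above has been read into a String named xflSource):

var doc:XML = new XML(xflSource);
default xml namespace = new Namespace("http://ns.adobe.com/xfl/2008/");

for each (var inst:XML in doc..DOMSymbolInstance) {
    var tile:int = int(inst.@firstFrame);             // which tile
    var tx:Number = Number(inst.matrix.Matrix.@tx);   // where it is placed;
    var ty:Number = Number(inst.matrix.Matrix.@ty);   // an omitted matrix reads as 0
    trace("tile " + tile + " at (" + tx + ", " + ty + ")");
}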
Anyway, you could preprocess it with an AIR applet to produce just the data you want, and then either spit out a .as file to include in the project, or simplified XML, or whatever you like. Or use whatever other tools/languages you prefer, and add that processing step to your build scripting.
The xfl file format is also handy for tracking down and fixing all manner of things which Flash is too broken/buggy/AFU to fix itself, such as leftover font references in obscure parts of parts of parts.... You can either fix them in the library, literally delete the file of the offending part, or edit the XML by hand. grep, sed, find and xargs are all your friends for these tasks, especially for things like snapping all coordinates to integer values or to proper cell boundaries, since all of Flash's 'snapping' is horribly broken too. Piping XML files through sed can be quite hazardous to files that you have not backed up, but quite rewarding for evil people who know what they're up to and use version control.
Every DisplayObject has only one Graphics reference. So if you want to move (or scale, etc.) several graphic objects within one Sprite, I suggest you use the display tree as it was intended:
Just add several children (Sprites or MovieClips or ...) to the one Sprite, each being redrawn only when necessary.
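A minimal sketch of that structure (again assuming a document class):

import flash.display.Shape;
import flash.display.Sprite;

var container:Sprite = new Sprite();
addChild(container);

for (var i:int = 0; i < 3; i++) {
    var tile:Shape = new Shape();
    tile.graphics.beginFill(0x66CC66);
    tile.graphics.drawRect(0, 0, 40, 40);
    tile.graphics.endFill();
    tile.x = i * 48;                       // each child is positioned independently
    container.addChild(tile);
}

container.getChildAt(1).y = 20;            // later: move one child without redrawing the others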
Related
There was a gif on the internet where someone used some sort of CAD tool and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture just at a different scale, and then they proceed to zoom in further on another tiny dot, revealing another detailed picture, repeating several times. here is the link to the gif
Or another, similar example: imagine you have a time series with a granularity of one millisecond per sample, and you zoom out to reveal years' worth of data.
My question is: how does such finely detailed data end up getting rendered when a huge amount of it is aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (i.e., in the case of the time series, go through a million records just to average them out into one line; or in the case of the CAD drawing, render the whole vector picture and blur it into a tiny dot)? Or are there certain level-of-detail optimizations that can be applied so that you don't have to do this?
If so, how do they work, and where can one learn about them?
This is a very well known problem in games development. In the following I am assuming you are using a scene graph, a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, which are shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the scale of the object as a factor. Modern 3d/CAD software will sometimes offer you automatic "simplification" of models, which can be used as the low res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view is only around 1-7 point checks, depending on how you check. And you can utilise object parenting for transitive bounding boxes.
Clipping: if a polygon is not rendered in the view port at all, no need to render it. In the GIF you posted, when the camera zooms in on a new scene, what is left from the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, the coordinates of vertices become tiny fractional floating point numbers. Given that you want all coordinates to be as precise as possible, and that modern CPUs can only handle floats with 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean by that is: as your camera zooms in to, say, 1/1000 of the previous view, you can scale up the bigger objects by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but allows you to stay within well-defined 3d coordinates while being able to zoom in infinitely.
On a higher level: As you zoom into something and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera space is moving and the world gets multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
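A sketch of that hand-off in 2D ActionScript terms (world, camera and the threshold are made-up names; the point is only that the scale factor migrates from the camera into the object coordinates):

import flash.display.DisplayObject;
import flash.display.Sprite;

function rebase(world:Sprite, camera:Object):void {
    if (camera.zoom < 1000) return;        // arbitrary threshold
    var k:Number = camera.zoom;            // fold the camera's zoom into the world...
    for (var i:int = 0; i < world.numChildren; i++) {
        var o:DisplayObject = world.getChildAt(i);
        o.x *= k;  o.y *= k;
        o.scaleX *= k;  o.scaleY *= k;     // coordinates stay in a comfortable range
    }
    camera.x *= k;  camera.y *= k;         // ...and reset the camera, keeping the view identical
    camera.zoom = 1;
}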
First, you can use caching, with tiles, like it's done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for the cache (it's not that much, actually - much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as picking only every other point (or every 10th point, or whatever suits you). This yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders - anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly sample some points, and if, for example, 100 sampled points hit pixel one and only 50 hit pixel two, you can more or less safely assume that if you kept sampling, pixel one would still be twice as likely to be hit as pixel two. So you can just stop there and draw pixel one with twice the intensity.
Also consider how much data you can and want to put into a pixel. If you draw in grayscale, there are only 256 shades, and you don't need to be more precise than that. And if you're going to draw a pixel in full color, you still need to ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?
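A sketch of that sampling idea (data, domainWidth, samples and the output BitmapData are all placeholders):

import flash.display.BitmapData;
import flash.geom.Point;

var width:int = output.width;
var hits:Vector.<int> = new Vector.<int>(width, true);

// Randomly sample the dataset and count hits per pixel column.
for (var i:int = 0; i < samples; i++) {
    var p:Point = data[int(Math.random() * data.length)];
    var px:int = int(p.x * width / domainWidth);
    if (px >= 0 && px < width) hits[px]++;
}

// A column hit twice as often gets twice the intensity (clamped to 255).
for (var x:int = 0; x < width; x++) {
    var shade:int = int(Math.min(255, hits[x]));
    output.setPixel(x, 0, (shade << 16) | (shade << 8) | shade);
}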
I am developing a 2D space RTS game which involves spaceships navigating over large maps (think Homeworld but in 2D), which mostly consist of open space, but occasionally containing asteroid fields, space structures and other unwalkable terrain elements.
The problem I have is that a grid-based solution with good enough precision results in a very large number of nodes, so pathfinding takes a long time; and considering that the maps contain a lot of open space, huge walkable sections of the map are represented by a large number of walkable nodes.
I have tried switching to a quad-tree representation of the map to reduce the number of nodes in the navigation graph, but this approach yields weird paths that involve ships going through the exact center of a square when in fact they just have to pass through the square to the next one.
I have managed to implement a path optimization which removes nodes from the path when there is a straight path to a later point in it, but this only partially resolved the problem, so my feeling now is that I am still using the wrong representation for my navigation graph.
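Concretely, that optimization looks roughly like this (in ActionScript for illustration; hasLineOfSight is my own clearance test, and the path has at least two nodes):

import flash.geom.Point;

function smooth(path:Vector.<Point>):Vector.<Point> {
    var result:Vector.<Point> = new Vector.<Point>();
    var anchor:int = 0;
    result.push(path[0]);
    for (var i:int = 2; i < path.length; i++) {
        // keep skipping intermediate nodes while the straight segment stays clear
        if (!hasLineOfSight(path[anchor], path[i])) {
            anchor = i - 1;
            result.push(path[anchor]);
        }
    }
    result.push(path[path.length - 1]);
    return result;
}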
Looking at the way Unity does things, there is a way to represent navigation data using a mesh, but I haven't found any source code or even a more or less detailed description of its inner workings.
So which data structure/pathfinding algorithm is optimal for my case?
I'm looking to auto-generate a UML class model in virtual reality using A-Frame.io (or another technology) by passing in values. Has anyone ever done something similar in the past? Not sure where to start.
Thanks
You might want to look into plantuml, which is a nice UML generator. Most of its diagrams are generated as input to graphviz's dot. Dot is a layout engine: it takes a list of nodes and connections, lays them out in 2D space, and then renders them to one of its output formats - or just returns the graph, this time with coordinates for where to draw what. You could meddle with this data and render the elements with volume, but on a 2D plane at the dot-generated coordinates. Perhaps you could even modify it to place them in 3D space instead of on a plane.
Or you could just render the plantuml output on a 2D plane, place it in 3D space and it would probably be good enough. There are even online generators for plantuml.
OK, here is the thing: recently I decided I wanted to understand how random map generation works. I found some papers and some arguments. The most interesting ones were the "Diamond-Square algorithm" and "Midpoint Displacement". I still have to try to apply them in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, or even how to create those polygons using the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not wrong, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what blocks you is how to represent the data. You're probably used to a 2D grid that simply represents the type of each tile. As you know, this is fine for handling a tile-based map, but doesn't properly allow you to model worlds where tiles have a different shape.
Graphs
What I suggest is that you see the problem a bit differently. A grid is nothing more than a graph (more info) with nodes that have 4 (or 8, if you allow diagonals) implicit neighbor nodes. So first, what I would do if I were you would be to move from your strict standard 2D grid to a more "loose" graph, where each node has a position, a list of neighbors (in most cases you'll have corners with 2 neighbors, borders with 3 and "middle" tiles with 4), and finally a rendering component which simply draws your tile on screen at the given position. Once this is done, you should be able to get the exact same results on screen that you currently have with your "2D tile-based" engine, by simply calling the rendering component of each node whose bounding box (I didn't mention it in the list of what to add to your node, but I'll get back to this later) intersects with the camera's frustum (in a 2D world, that would most likely mean its position +/- its size intersects the RECT currently being drawn).
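A sketch of such a node, in ActionScript terms for illustration (names are made up; render takes the camera rectangle so it can do the bounding-box test described above):

package {
    import flash.display.Graphics;
    import flash.geom.Rectangle;

    public class MapNode {
        public var x:Number, y:Number;               // position
        public var width:Number, height:Number;      // bounding box, for culling
        public var neighbors:Vector.<MapNode> = new Vector.<MapNode>();

        public function MapNode(x:Number, y:Number, w:Number, h:Number) {
            this.x = x; this.y = y; width = w; height = h;
        }

        // Draw only when the bounding box intersects the camera rectangle.
        public function render(g:Graphics, view:Rectangle):void {
            if (x + width < view.x || x > view.right ||
                y + height < view.y || y > view.bottom) return;
            g.beginFill(0x999999);
            g.drawRect(x, y, width, height);
            g.endFill();
        }
    }
}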
Search
The more generic approach will also help you do things like pathfinding with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decide to stick with a good old 2D tile map game, these techniques will still be useful!
Yeah but I want Polygons
I hear you! So, if you want polygons, basically all you need to do is add to your nodes a list of vertices and the appropriate data you might need to render your polygons (vertex colors, textures and U/V maps, etc...), and update your rendering component to make the appropriate OpenGL calls (this, for example, should help) to draw your nodes. Once again, the first step in iteratively upgrading your 2D tile engine to a polygon map engine would be to give each node two triangles, a texture resource (the tile), and U/V mappings (0,0 - 0,1 - 1,0 and 1,1) for each tile in your map. When this step is done, you should have a "generic" polygon-based tile map engine. Most of this data can be created procedurally by calculating coordinates based on tile position, tile size, etc...
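If it helps to see it in code, here is the same two-triangles-per-tile idea sketched with Flash's Graphics.drawTriangles rather than OpenGL, to stay consistent with the other examples on this page (tileSheet is an assumed BitmapData; a real tileset would offset the U/Vs per tile):

import flash.display.Shape;

var s:Shape = new Shape();
var vertices:Vector.<Number> = Vector.<Number>([0,0,  64,0,  0,64,  64,64]);
var indices:Vector.<int>     = Vector.<int>([0,1,2,  1,3,2]);   // two triangles per tile
var uv:Vector.<Number>       = Vector.<Number>([0,0,  1,0,  0,1,  1,1]);

s.graphics.beginBitmapFill(tileSheet);     // the texture
s.graphics.drawTriangles(vertices, indices, uv);
s.graphics.endFill();
addChild(s);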
Convex Polygons
If you decide that you might ever need NPCs to navigate your map, or want to allow your player to navigate by clicking the map, I would suggest that you always use convex polygons (the triangle being the simplest form of convex polygon). This allows your code to assume that any two positions inside the same polygon can be connected in a straight line.
Complex Maps
Based on the link you provided, you want to have rather complex maps. In this case, the author used Voronoi Diagrams to generate the polygons of the map. There are already solutions for doing triangulation like that, but you might also want to use other techniques that are easier to work with if you're just switching to 3D - like this one, for example. Once you have interesting results, you should consider implementing serialization to save/load your map data from the game. If you want to create an editor, be aware that it might be a lot of work, but it can be worth it if you want people to help you create maps or to add elements to the maps (like geometry that's not part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles, and do a hit-test from the centre of each tile against the polys: the tile takes the type of the polygon it lands in. Did you need more than that?
EDIT: Sorry, I realize that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and chooses randomly whether or not each tile is occupied. Then iterate over them again and make a tile occupied if it or one of its neighbours was.
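A sketch of those two passes (a 1D Vector indexed as y * w + x; the 0.4 fill chance is arbitrary, and the borders are left as seeded for brevity):

var w:int = 32, h:int = 32;
var occupied:Vector.<Boolean> = new Vector.<Boolean>(w * h, true);

// Pass 1: seed each tile randomly.
for (var i:int = 0; i < w * h; i++)
    occupied[i] = Math.random() < 0.4;

// Pass 2: a tile becomes occupied if it or any 4-neighbour was.
var next:Vector.<Boolean> = occupied.slice();
for (var y:int = 1; y < h - 1; y++)
    for (var x:int = 1; x < w - 1; x++)
        next[y * w + x] = occupied[y * w + x]
            || occupied[y * w + x - 1] || occupied[y * w + x + 1]
            || occupied[(y - 1) * w + x] || occupied[(y + 1) * w + x];
occupied = next;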
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html
I have a two dimensional array of three dimensional volumes. This two dimensional array represents a top-down view of all of the three dimensional volumes.
I want to save this data to a file in such a way that I can retrieve it very quickly. My problem is that the two dimensional array may change size and shape; it's not always nice and square. This tends to leave unused sections and quite a lot of them.
My method of retrieval is currently using the volume-level 2D view to locate volumes that need to be loaded, but I am having difficulty coming up with a good data structure and storage technique. Most of the methods I have come across require the 2D view to be of the same length and width or depend on the length or the width.
It may be worth noting that I want to avoid unused space in the file, and that I also want good locality for the mapped points. When mapping points, it is pretty common to come up with a solution which works but produces odd relationships: {0, 0} must not map to anything but {0}, and {1, 0} should be pretty close to {0}, not something like {34}.
How would you go about doing this in a space and time efficient manner?
I solved this a while back by implementing a few different space-filling curves and using them to map the higher-dimensional data down to the one-dimensional file. I found that a Hilbert curve worked perfectly.
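The forward mapping is short; a sketch of the standard bit-twiddling version (n is the grid side, a power of two; the inverse, d2xy, follows the same pattern):

function xy2d(n:int, x:int, y:int):int {
    var rx:int, ry:int, t:int, d:int = 0;
    for (var s:int = n >> 1; s > 0; s >>= 1) {
        rx = (x & s) > 0 ? 1 : 0;
        ry = (y & s) > 0 ? 1 : 0;
        d += s * s * ((3 * rx) ^ ry);
        if (ry == 0) {                     // rotate/flip the quadrant
            if (rx == 1) {
                x = n - 1 - x;
                y = n - 1 - y;
            }
            t = x; x = y; y = t;
        }
    }
    return d;                              // 1D offset with good 2D locality
}

// e.g. file offset for the volume at grid cell (x, y):
// var offset:int = xy2d(gridSide, x, y) * volumeByteSize;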
So you are talking about just saving 2D slices of your model space, right?
To be honest, I think the simplest and probably the best thing to do is just save everything. It makes things very simple, and you can seek very easily to a specific spot in the file too.
Then compress your file stream using zlib, bz2, etc. If you have a lot of zeros, it will compress very well. When I started doing this, it sped up my HPC code quite a bit.
I can think of several more complicated things to do, but what are you really trying to achieve? The compression will make it small, and it is nice to have a simple format.
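In Flash/AIR terms, that could be as simple as this sketch (grid stands for your flattened volume data; writing the bytes out would go through AIR's FileStream):

import flash.utils.ByteArray;

var bytes:ByteArray = new ByteArray();
for (var i:int = 0; i < grid.length; i++)
    bytes.writeInt(grid[i]);               // save everything, including the empty cells

bytes.compress();                          // zlib: long runs of zeros shrink to almost nothing

// To read a single cell later: uncompress() the stream, then seek to a
// computable offset, e.g. bytes.position = (y * width + x) * 4; bytes.readInt();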