I have a grid-based game (a platformer) where I've based everything on tiles. I have tiles that are solid and tiles that are liquid. I'm trying to find a good way to make water tiles simulate water in a rough way.
At the moment I have the following system:
When a water tile is added above another water tile, it adds 1 to the water tile below. The number indicates the pressure.
Here's how it looks at the moment:
[0] <- This water tile has 0 in pressure.
[1] <- This water tile has 1 in pressure.
If I add another water tile next to the bottom one, it searches left, right, and above for water tiles and inherits the biggest pressure around it.
Example:
[0]
[1][1]
And here's a bigger example after adding few water tiles:
[0][0]
[1][1][1][1]
[2][2][2][2][2]
Then I make every water tile whose pressure is greater than or equal to 1 try to move left/right if there's free space, then set its pressure to 0 and check whether it can inherit pressure from neighboring water tiles, if there are any.
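In rough Python, the rule looks something like this (a sketch with illustrative names, not my actual code):

```python
# Sketch of my current rule (illustrative names, not my real code).
# grid maps (x, y) -> pressure, with y increasing downward.

def place_water(grid, x, y):
    below = (x, y + 1)
    if below in grid:
        grid[below] += 1                      # new tile above adds 1 below
    # inherit the biggest pressure from left, right and above
    neighbors = [(x - 1, y), (x + 1, y), (x, y - 1)]
    pressures = [grid[n] for n in neighbors if n in grid]
    grid[(x, y)] = max(pressures, default=0)
```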
This system works very well, except for the case when water tiles are removed from the top.
If I remove the top water tiles from the last example:
[1][1][1][1]
[2][2][2][2][2]
The top row still has pressure 1, but it should now have 0, and the bottom row should have 1.
Is there some smarter system I can implement to do this more properly?
The following are the restrictions:
Each tile can only check its neighboring tiles.
A tile can have any functions defined.
A tile can have any variables to store data.
Can you come up with a system that works better than mine?
The usual test cases I use are:
[]
[]        should become  [][]

[]
[]
[]        should become  [][][]

[]
[][][]    should become  [][][][]
Assuming the game runs for a while.
Any suggestions would be more than welcome!
Maybe you could forget about pressures (since you are not really using them as pressures) and simply use a bool tryMove or something like that.
Each simulation step should be broken into two substeps.
First loop over the tiles:
If the space below is free, set tryMove to true; finish this tile.
If there is a tile above, set tryMove to true, else set tryMove to false.
If any neighbor is trying to move, set tryMove to true.
Second loop over the tiles:
If it is trying to move and the space below is free, move the tile down, set tryMove to false, finish this tile.
If it is trying to move and it can move sideways (free space to the left or right), set the neighbor's tryMove to false, move this tile, set its tryMove to false, finish this tile.
I think this should fix your issue.
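Here is a minimal sketch of those two substeps, assuming tiles are stored in a dict keyed by (x, y) with y growing downward (names and representation are just illustrative):

```python
# Rough sketch of the two substeps. tiles maps (x, y) -> {"tryMove": bool};
# solid is a set of wall positions; y grows downward, so "below" is (x, y + 1).

def is_free(tiles, solid, pos):
    return pos not in tiles and pos not in solid

def step(tiles, solid):
    # Substep 1: decide who wants to move.
    for (x, y), t in tiles.items():
        if is_free(tiles, solid, (x, y + 1)):
            t["tryMove"] = True                 # space below is free
        else:
            t["tryMove"] = (x, y - 1) in tiles  # pressed by a tile above?
    for (x, y), t in tiles.items():
        # If any horizontal neighbor is trying to move, so is this tile.
        if any(n in tiles and tiles[n]["tryMove"]
               for n in ((x - 1, y), (x + 1, y))):
            t["tryMove"] = True

    # Substep 2: actually move (down first, then sideways).
    for (x, y) in list(tiles):
        t = tiles.get((x, y))
        if t is None or not t["tryMove"]:
            continue
        for target in ((x, y + 1), (x - 1, y), (x + 1, y)):
            if is_free(tiles, solid, target):
                del tiles[(x, y)]
                t["tryMove"] = False
                tiles[target] = t
                break
```

Trying "down" before "sideways" in the second substep makes surface tiles fall before the column spreads out.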
If you have the resources to support it, I recommend a recursive function that tries pushing tiles left and right from above. If the function finds another tile below, it can try displacing that tile to either side. The best method would be to start stepping left and right, one tile at a time. If it keeps finding water tiles, it keeps going. Eventually it will either run out of tiles to check (end up at a wall) or find a water tile with free space to move over. After multiple iterations, the water should push outward and make room for the higher water to fall down.
Let me clarify the left-then-right nature of the recursion. Essentially, once the function makes it to the base level of the water, it should start choosing water tiles, alternating left and right: first the tile directly left, then directly right, then two left, then two right, and so on. If it finds an air tile, it should displace the nearest water tile into it. If it finds something else (usually a wall), it should give up on that side. Once it has given up on both sides, you should consider other behaviors instead, such as the topmost water randomly traveling around over the surface.
If you really want the behavior to seem natural, I highly recommend a random variable deciding whether it checks left or right first. Otherwise you will probably end up with strangely regular recurring patterns.
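Sketched in Python (illustrative names; the grid is assumed to map (x, y) to "water" or "wall", with absent cells counting as air):

```python
import random

# Sketch of the alternating search: from a water tile at (x, y), step
# outward left and right until an air cell is found or both sides hit a
# wall. Randomizing the first side avoids strangely regular patterns.

def find_air(grid, x, y, max_dist=64):
    sides = [-1, +1]
    random.shuffle(sides)                    # check left or right first
    alive = {s: True for s in sides}
    for dist in range(1, max_dist + 1):
        for s in sides:
            if not alive[s]:
                continue
            cell = grid.get((x + s * dist, y), "air")  # absent cells are air
            if cell == "air":
                return (x + s * dist, y)     # displace nearest water here
            if cell != "water":              # a wall: give up on this side
                alive[s] = False
        if not any(alive.values()):
            return None                      # both sides walled off
    return None
```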
There was a GIF on the internet where someone used some sort of CAD program and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture at another scale, and then they proceed to zoom in further on another tiny dot, revealing yet another detailed picture, repeating several times. Here is the link to the gif.
Or another similar example: imagine you have a time series with a granularity of one millisecond per sample, and you zoom out to reveal years' worth of data.
My question is: how does such finely detailed data get rendered in the end, when a huge amount of data ends up aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (in the case of a time series, go through a million records just to average them into one line; in the case of CAD, render the whole vector picture and blur it into a tiny dot), or are there level-of-detail optimizations that can be applied so that you don't have to?
If so, how do they work, and where can one learn about them?
This is a very well known problem in games development. In the following I am assuming you are using a scene graph, a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, which are shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the scale of the object as a factor. Modern 3D/CAD software will sometimes offer automatic "simplification" of models, which can be used as the low-res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view takes only around 1-7 point checks depending on how you check, and you can utilise object parenting for transitive bounding boxes.
Clipping: if a polygon does not appear in the viewport at all, there is no need to render it. In the GIF you posted, when the camera zooms in on a new scene, what is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, the coordinates of vertices become tiny fractional floating-point numbers. Given that you want all coordinates to be as precise as possible, and that modern CPUs only handle floats with 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean is that as your camera zooms in to, say, 1/1000 of the previous view, you can scale up the bigger objects by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but allows you to stay within well-defined 3d coordinates while being able to zoom in infinitely.
On a higher level: As you zoom into something and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera space is moving and the world gets multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
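As a sketch of that re-scaling idea (engine-agnostic, all names illustrative): once the accumulated zoom passes a threshold, bake that factor into the world objects and reset the camera, keeping coordinates well-conditioned.

```python
# Illustrative sketch only; a real engine would rescale node transforms.

RESCALE = 1000.0

class Obj:
    def __init__(self, position, scale=1.0):
        self.position = list(position)
        self.scale = scale

class View:
    def __init__(self, objects):
        self.zoom = 1.0            # accumulated camera zoom
        self.objects = objects

    def zoom_in(self, factor):
        self.zoom *= factor
        while self.zoom >= RESCALE:
            # Scale the world up instead of zooming the camera further in,
            # so vertex coordinates stay in a comfortable float range.
            for obj in self.objects:
                obj.position = [c * RESCALE for c in obj.position]
                obj.scale *= RESCALE
            self.zoom /= RESCALE
```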
First, you can use caching with tiles, as is done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for the cache (it doesn't take much, actually; much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as picking only every other point (or every 10th point, or whatever suits you). It yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders: anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly sample some points, and if, for example, 100 sampled points hit pixel one and only 50 hit pixel two, you can more or less safely assume that if you continued sampling, pixel one would remain about twice as likely to be hit as pixel two. So you can just stop there and draw pixel one with twice the intensity.
Also consider how much data you can and want to put into a pixel. If you draw a pixel in grayscale, there are only 256 possible values, so you don't need to be more precise than that. And if you draw a pixel in full colour, you still need to ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?
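A sketch of the sampling idea for a time series (illustrative names; data is assumed to be a long, time-sorted list of (t, value) pairs):

```python
import random

# Estimate per-pixel hit counts from a random sample instead of scanning
# every record, then shade pixels proportionally to the estimate.

def render_row(data, width, n_samples=10_000):
    t_min, t_max = data[0][0], data[-1][0]
    hits = [0] * width
    for _ in range(n_samples):
        t, _value = random.choice(data)
        px = min(width - 1, int((t - t_min) / (t_max - t_min) * width))
        hits[px] += 1
    peak = max(hits) or 1
    # 256 grey levels is all a pixel can show anyway, so this is enough.
    return [round(255 * h / peak) for h in hits]
```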
I have a map full of markers corresponding to GPS coordinates, represented as a PostgreSQL + PostGIS database table using "geography" type for the GPS column.
Imagine, if you will, a semi-transparent 1x1 mile square on top of each of these points, centred on it, often intersecting one another.
I'm trying to determine the minimum number of such "squares" and their GPS coordinates, so that they "cover" all of the markers with a minimum of 25 meters to the nearest border.
If it makes it any easier, the positions of these "squares" don't have to match the positions of any of the markers.
The purpose of this is to attempt to cut down the number of API requests to a "houses for sale" service significantly, since most of the positions are close to each other and the API takes a 1x1 mile square "bounding box" as the input for each call. It would be insanely wasteful to call the API many times for basically the same area when maybe 1 or 2 times would do it if I can first figure out where these imaginary "squares" go.
I get the feeling that this is considered a "known, common and solved" problem, but so far, I've not been able to figure out how to do it.
Sorry, but it seems like you have no idea what you are doing and are just being rude, both here and in the PostGIS IRC channel.
You give no information about your API. What is creating your maps? Is it a WMS service, or what?
What most people would do is set up a map service with a tile cache. The map service will then pick the tiles needed for each house you want to show (or for multiple houses).
The tiles will either be prepared in advance or created on the fly, but they will be cached for the next time.
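A minimal sketch of that caching pattern (illustrative Python; render_tile is a hypothetical stand-in for the expensive rendering step, which the servers listed below do for real):

```python
# Sketch of a tile cache: render a tile once, serve it from cache afterwards.

tile_cache = {}

def render_tile(z, x, y):
    ...                                          # expensive map rendering here

def get_tile(z, x, y):
    key = (z, x, y)
    if key not in tile_cache:
        tile_cache[key] = render_tile(z, x, y)   # computed once, on the fly
    return tile_cache[key]
```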
So, I think you should read up on things like
MapServer
Mapnik
MapCache
Mapproxy
GeoServer
That is not a complete list, but it might give you some ideas about what it is that you want.
This seems to be a rather frequently asked question (hear me out first! :)
I've created a polygon with Perlin noise, and it looks like this:
I need to generate a texture from this array of points. (I'm using Monogame/XNA, but I assume this question is somewhat agnostic).
Anyway, researching this problem tells me that many people use raycasting to determine how many times a ray crosses the polygon's outline (an odd number of crossings means the point is inside; an even number, including zero, means it's outside). This makes sense, but I wonder if there is a better way, given that I have all of the points.
Doing a small raycast for every pixel I want to fill in seems excessive - is this the only/best way?
If I have a small 500px square image I need to fill in, I'll need to do a raycast for 250,000 individual pixels, which seems like an awful lot.
If you want to do this for every pixel, you can use a sweeping line:
Start from the topmost coordinate and examine a horizontal ray from left to right. Calculate all intersections with the polygon and sort them by their x-coordinate. Then iterate over all pixels on the line and keep track of whether you are in or out; whenever you encounter an intersection, switch to the other side. If a pixel is in, set the texture; if not, ignore it. Do this from top to bottom for every horizontal line.
The intersection calculation can be enhanced in several ways, e.g. by using an acceleration data structure such as a grid or quadtree, or by examining the intersecting or touching edges of the polygon beforehand. Then, when you sweep the line, you will already know which edges will cause an intersection.
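A minimal sketch of the sweep in Python (even-odd rule, sampling at pixel centres; poly is assumed to be a list of (x, y) vertex tuples):

```python
# Even-odd scanline fill. poly is a list of (x, y) vertices in order;
# returns a 0/1 mask with the pixels inside the polygon set.

def scanline_fill(poly, width, height):
    mask = [[0] * width for _ in range(height)]
    edges = list(zip(poly, poly[1:] + poly[:1]))    # include closing edge
    for y in range(height):
        yc = y + 0.5                                # sample pixel centres
        xs = []
        for (x0, y0), (x1, y1) in edges:
            if (y0 <= yc) != (y1 <= yc):            # edge crosses this line
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        for x_in, x_out in zip(xs[0::2], xs[1::2]): # fill between pairs
            for x in range(max(0, round(x_in)), min(width, round(x_out))):
                mask[y][x] = 1
    return mask
```

This visits each pixel once and each edge once per line, instead of running a full raycast per pixel.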
In a raster scan, when the beam reaches the right-hand side of the screen it undergoes a process known as horizontal flyback, in which its intensity is reduced and it is made to "fly back" across the screen (the top-most mauve line). While the beam is flying back, it is also pulled a little way down the screen.
Because of the inductive inertia of the magnetic coils which deflect the electron beam, there is a delay named horizontal/vertical retrace time.
Why does the beam not scan even lines from right to left (like the image below)?
Horizontal retrace (flyback) time could then be reduced: there would be no need to deflect the beam horizontally at the end of each line; just a tiny vertical deflection would be needed.
The one-way scan principle directly mirrors the way the magnetics in the deflection circuitry work: first there is a slow current ramp, with the ray moving left to right, while the deflection/flyback transformer is connected to the constant-voltage supply; then comes the "flyback" stage, in which a higher, opposite-polarity voltage is applied to the deflection coil by disconnecting the flyback transformer from the supply.
Both-ways deflection would need a push-pull configuration of deflection drivers, and that was probably considered too complicated and too costly during the early days, when TV standards were emerging.
Another reason could be that it would be problematic to match the position of the picture on neighbouring scanlines, unlike with single-way deflection circuitry, which behaves exactly the same on every scanline.
To expand on lvd's comment about matching neighbouring lines, the diagrams you have are a simplification; on a real classic CRT nothing is horizontal. A real complete field looks more like:
There's no coupling between horizontal and vertical motion; each acts entirely independently. Left-to-right deflection is doing one thing while top-to-bottom does another, with no communication between the two. Don't think of it like a line printer.
The image formed on the screen is solid because each scan line has a certain height to it, so the rows join up. If left-to-right scanning were left-to-right-to-left-to-right as proposed, then either:
* the image wouldn't be solid. It'd be a zig zag of alternating diagonal lines that touched at the edges. Perceptually, it'd appear to be a lot darker in the middle; or
* the image would be very heavily overdrawn. Still darker in the middle, and also indistinct at the edges.
I drew this myself, so be forgiving; the degree of diagonal slant is identical between the three options:
I've seen many Mandelbrot image generators draw a low-resolution version of the fractal and then continuously improve it. Is this a tiling algorithm? Here is an example: http://neave.com/fractal/
Update: I've found this about recursively subdividing and calculating the Mandelbrot set: http://www.metabit.org/~rfigura/figura-fractal/math.html. Maybe it's possible to use a kd-tree to subdivide the image?
Update 2: http://randomascii.wordpress.com/2011/08/13/faster-fractals-through-algebra/
Update 3: http://www.fractalforums.com/programming/mandelbrot-exterior-optimization/15/
Author of Fractal eXtreme and the randomascii blog post linked in the question here.
Fractal eXtreme does a few things to give a gradually improving fractal image:
Start from the middle, not from the top. This is a trivial change that many early fractal programs ignored. The center should be the area the user cares most about. This can mean either starting with a center line or spiraling out; spiraling out has more overhead, so I only use it on computationally intense images.
Do an initial low-res pass with 8x8 blocks (calculating one pixel out of 64). This gives a coarse initial view that is gradually refined at 4x4, 2x2, then 1x1 resolutions. Note that each pass does three times as many pixels as all previous passes -- don't recalculate the original points. Subsequent passes also start at the center, because that is more important.
A multi-pass method lends itself well to guessing. If four pixels in two rows have the same value then the pixels in-between probably have the same value, so don't calculate them. This works extremely well on some images. A cleanup pass at the end to look for pixels that were miscalculated is necessary and usually finds a few errors, but I've never seen visible errors after the cleanup pass, and this can give a 10x+ speedup. This feature can be disabled. The success of this feature (guess percentage) can be viewed in the status window.
When zooming in (double-click to double the magnification), the previously calculated pixels can be used as a starting point, so that only three quarters of the pixels need calculating. This doesn't work when the required precision increases, but those discontinuities are rare.
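Reduced to a sketch, the multi-pass refinement looks something like this (illustrative Python, not FX's actual code; the guessing and centre-first ordering are omitted):

```python
# Coarse-to-fine passes: 8x8, 4x4, 2x2, 1x1. Each pass computes only the
# pixels not already done by a coarser pass. (Illustrative, not FX code.)

def mandel(cx, cy, max_iter=256):
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

def render_progressive(width, height, x0, y0, scale):
    img = [[0] * width for _ in range(height)]
    computed = set()
    for step in (8, 4, 2, 1):
        for py in range(0, height, step):
            for px in range(0, width, step):
                if (px, py) in computed:
                    continue                  # done in a coarser pass
                computed.add((px, py))
                v = mandel(x0 + px * scale, y0 + py * scale)
                for dy in range(step):        # block-fill until refined
                    for dx in range(step):
                        if py + dy < height and px + dx < width:
                            img[py + dy][px + dx] = v
        yield img                             # one progressively finer frame
```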
More sophisticated algorithms are definitely possible. Curve following, for instance.
Having fast math also helps. The high-precision routines in FX are fully unwound assembly language (generated by C# code) that uses 64-bit multiplies.
FX also has a couple of checks for points within the two biggest bulbs, to avoid calculating them at all. It also watches for cycles in calculations -- if the exact same point shows up then the calculations will repeat.
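The tests for the two biggest regions have well-known closed forms; combined with a simple periodicity check, the idea looks roughly like this (a sketch, not FX's implementation):

```python
# Closed-form membership tests for the two biggest regions of the
# Mandelbrot set, plus a naive cycle check. (Illustrative, not FX code.)

def in_main_cardioid(cx, cy):
    q = (cx - 0.25) ** 2 + cy ** 2
    return q * (q + (cx - 0.25)) <= 0.25 * cy ** 2

def in_period2_bulb(cx, cy):
    return (cx + 1.0) ** 2 + cy ** 2 <= 0.0625

def mandel_with_shortcuts(cx, cy, max_iter=1000):
    if in_main_cardioid(cx, cy) or in_period2_bulb(cx, cy):
        return max_iter                     # inside: skip iterating entirely
    zx = zy = 0.0
    seen_zx = seen_zy = 0.0                 # periodicity-check reference
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
        if zx == seen_zx and zy == seen_zy:
            return max_iter                 # exact cycle: will never escape
        if i & (i + 1) == 0:                # refresh reference at i = 2^k - 1
            seen_zx, seen_zy = zx, zy
    return max_iter
```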
To see this in action visit http://www.cygnus-software.com/
I think that site is not as clever as you give it credit for. I think what happens on a zoom is this:
Take the previous image and scale it up using a standard interpolation method. This gives you the 'blurry' zoomed-in image. Click the zoom-in button several times to see this best.
Then, in concentric circles starting from the central point, recalculate squares of the image in full resolution for the new zoom level. This 'sharpens' the image progressively from the centre outwards. Because you're probably looking at the centre, you see the improvement straight away.
You can more clearly see what it's doing by zooming far in, then dragging the image in a diagonal direction, so that almost all the screen is undrawn. When you release the drag, you will see the image rendered progressively in squares, in concentric circles from the new centre.
I haven't checked, but I don't think it's doing anything clever to treat in-set points differently - it's just that because an entirely-in-set square will be black both before and after rerendering, you can't see a difference.
The old-school Mandelbrot rendering algorithm is the one that begins calculating pixels at the top-left position, goes right until it reaches the end of the screen, then moves to the beginning of the next line, like an ordinary typewriter (visually).
The linked algorithm just calculates pixels in a different order, and when it calculates one, it quickly makes assumptions about certain neighboring pixels and later goes back to properly redraw them. That's when you see the improvement; think of it as displaying a progressive JPEG. If you zoom into the set, certain pixel values will remain the same (they don't need to be recalculated), while the interim pixels will be guessed, quickly drawn, and later recalculated.
A continuously improving Mandelbrot is just for your eyes; it will never finish earlier than a properly calculating per-pixel algorithm that can detect "islands".