How would I implement the flood fill algorithm in GameMaker?

So, I have implemented cellular automata in one of my games in GameMaker, but it generates tons of disconnected caves... Does anyone know how to implement an algorithm like flood fill?

Have your automata check their four immediate neighbours (the cells above, below, left and right). If a checked neighbour is the colour they're targeting, spawn an automaton there.
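For reference, a minimal sketch of a classic queue-based flood fill in Python (the grid layout and value names here are illustrative; in GML the same loop works with a ds_queue over your cave grid):

```python
from collections import deque

def flood_fill(grid, start_x, start_y, target, fill):
    """Queue-based flood fill over a 2D grid (a list of rows).
    Replaces every 'target' cell connected to the start with 'fill'."""
    height, width = len(grid), len(grid[0])
    if grid[start_y][start_x] != target or target == fill:
        return
    queue = deque([(start_x, start_y)])
    while queue:
        x, y = queue.popleft()
        if 0 <= x < width and 0 <= y < height and grid[y][x] == target:
            grid[y][x] = fill
            # Spread to the four orthogonal neighbours.
            queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
```

Run it once per not-yet-labelled floor cell, with a fresh fill value each time, to number the connected caves; you can then keep only the largest region or carve tunnels between the rest.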

Related

Alloy model - how to display cycle in visualization of graph?

We have an Alloy model intended to find possible deadlocks in a system. It is set up such that when a counterexample is found, that implies the system being modeled may have a deadlock, i.e. a circular dependency between nodes in the graph. The problem is the visual model of the graph is so complicated, it's nearly impossible to find the cycle representing the deadlock. If there were a way to highlight the cycle, or at least highlight arcs in the graph that are directed "up" rather than "down", this would help us visualize things better (since in the model we have, a deadlock-free system has all arcs directed in a downward direction). Is there a way to highlight or selectively plot the nodes and arcs that create the counterexample?
The first thing that comes to my mind is that when Alloy shows an instance of a predicate, the various arguments to the predicate can be styled specially. So you might try (1) defining a predicate that is the inverse of your assertion, i.e. one that holds when a deadlock is present and which assigns a named role to the nodes in the cycle, and (2) setting the style to display those nodes in a distinct color or shape. You may be able to hide everything that's not in the cycle, or gray it out.

How do I generate a waypoint map in a 2D platformer without expensive jump simulations?

I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. For each of those platforms, if it is obviously unreachable (due to being higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
   3a. Repeat this jump attempt for each possible starting position on the starting platform.
4. If the attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
6. Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
[Image: visualisation of the linked platforms]
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason is obvious: the yellow text in the screen centre shows how long it took to build the graph, over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally just run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms then it'll be a joke at the hundreds I need to support. Heck, at this geometric time cost it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict whether a jump is impossible I have managed to lower the number of expensive tests run by a whopping 33 (12%!). However the more exceptions and special cases I add to the code the more convinced I am that the actual problem is in the jump simulation code, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but which doesn't provide a good solution. This is literally the only resource I've found.
You could limit the number of comparisons by only comparing nearby platforms. I would probably check just the horizontal distance between platforms, and if it is wider than the longest possible jump, not bother checking for a link between those two. But you may already be doing this, since you check against the maximum jump height.
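For instance, a cheap rejection test might look like the sketch below (the platform fields and limits are illustrative, not GameMaker API; y grows downward as in GameMaker, so a smaller 'top' means higher up):

```python
def could_link(a, b, max_gap, max_jump_height):
    """Cheap pre-filter: reject clearly impossible links before the
    expensive jump simulation. Platforms are dicts with 'left',
    'right' and 'top' edge coordinates."""
    # Horizontal gap between the nearest edges (0 if they overlap).
    gap = max(b['left'] - a['right'], a['left'] - b['right'], 0)
    rise = a['top'] - b['top']  # positive when b is higher than a
    return gap <= max_gap and rise <= max_jump_height
```

Survivors of this test still get the full simulation; the filter only trims the pair count.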
I glanced at the video and it gave me an idea: instead of looking at all platforms to find which jumps are impossible, what if you did the opposite? Try placing an AI character on each platform and see which other platforms it can reach. That's certainly easier to implement if your enemies can't change direction in mid-air, though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the number of comparisons you need to make by using a spatial data structure, like a quadtree. This would let you severely restrict how many platforms you even try to check. It is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch every case you have, since you allow full horizontal control, but it might let you handle common cases more quickly (see the sketch after this list).
Consider some kind of walkability grid instead of a link-generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something close to your agent's dimensions is a good starting point). You could also filter by height, so that grid tiles that are higher than your jump height, and that you can't drop onto from a higher place, are marked unwalkable. Then, as part of the pathfinding step, you can check whether a path is actually executable when a jump starts ('start a jump; I can go up no more than 5 tiles, and after the peak of the jump I always fall vertically at some speed').
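On the "without simulating the whole jump" part of the question: with fixed-height jumps, no horizontal inertia, and full air control, the farthest horizontal distance reachable for a given height difference has a closed form under continuous physics. A minimal sketch (all names illustrative); frame quantisation, ceilings and intervening blocks can still invalidate a link, so treat it as a pre-filter rather than a replacement for the simulation:

```python
import math

def max_horizontal_reach(dy, jump_height, gravity, max_speed):
    """dy: how far above the start the landing platform sits (negative
    for a drop). Returns the farthest horizontal distance coverable,
    or None if the landing is higher than the jump can reach."""
    if dy > jump_height:
        return None
    t_up = math.sqrt(2 * jump_height / gravity)           # start to apex
    t_down = math.sqrt(2 * (jump_height - dy) / gravity)  # apex to landing
    # With full air control the character can hold max speed for the
    # whole airtime, so reach is simply speed times time in the air.
    return max_speed * (t_up + t_down)
```

A pair of platforms is then worth simulating only if its edge-to-edge gap is at most max_horizontal_reach for the pair's height difference.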

How to achieve realistic reflection with threejs

I am trying to render, as realistically as possible, a scene in which a point light hits an object and bounces off at the same angle with respect to the normal of the face (angle of incidence = angle of reflection), illuminating the scene elsewhere.
Now, I know reflection in three.js is normally handled with a CubeCamera and an environment-mapped material, as per the examples I found online, but that doesn't quite apply to my case, for I may be observing the scene from a point from which I cannot see the reflection of the object on the mirror-like surface of another one.
Consider this example prototype I'm working on: if the box protruding from the wall had a mirror-like material (accomplished with a CubeCamera), I wouldn't be able to see the green cube's reflection on its bottom face unless the camera was at a specific position. In real life, however, if an object illuminated by a light source passes in the vicinity of another one, it will in part light it as if it were a light source itself (depending on the object's reflectivity, of course), and this phenomenon should be visible from any point of view from which the indirectly lit object is visible.
Hence I came up with the idea of adding a PointLight to the cube, but this of course produces undesirable effects on the surroundings.
I will try to illustrate my goal with the following sequence:
1) Here, the far side of what I will henceforth refer to as the balcony is correctly dark, while the areas marked with a red 'x' are the consequence of the cube having a child PointLight, which shines in all directions.
2) Here, the balcony's far face is still dark and the bottom one is receiving even more light as the cube passes by, which is desirable; but the wall behind the cube should actually be dark (I haven't added shadows yet, I first want to get the lighting right), as should the ground beneath it and the lamp post.
3) Finally, when the cube has passed the balcony, it's just plain wrong for the balcony's side and bottom faces to be illuminated, for we all know that a reflected ray does not bounce back the way it came from. The same applies to the lamp post.
Now I realize that all these mistakes occur because the cube emits light itself; what I'm hoping you can help me with is determining a way to produce physically accurate reflected rays.
I would like to avoid using ambient light or other hacks to simulate real-life scenarios and stick to physics as much as possible; I suspect what I want to achieve is very computationally heavy to render, let alone animate in a real-time use case, but that's not an issue for I'm merely trying to develop a proof-of-concept, not something that should necessarily perform fast.
From what I gather, I should probably be writing custom vertex and fragment shaders for the materials receiving indirect illumination, right? Unfortunately I wouldn't know where to begin, can anyone point me in the right direction? Cheers.
If you do not want to go all the way to volumetric rendering, then you have three options (that I know of):
ray-tracing
You have to use ray-traced rendering (backward ray tracing) to achieve this. Coded properly, this also covers shadows, transparent materials, reflected illumination and much more. Unless you also want precise atmospheric scattering, this is the way to go.
Backward ray tracing casts one (or three) rays per screen pixel. It is much faster, but not quite as precise (still precise enough).
Forward ray tracing casts one ray per unit of solid angle (steradian) per light source. It is slow but precise (if the ray density is high enough).
If a cast ray hits an obstacle, its colour is changed (according to the obstacle's material properties) and a new ray is cast in the reflected direction. If the material is transparent, a refracted ray is cast as well. Each hit or refraction reduces the light intensity, so you stop when the intensity falls below some threshold, or at some recursion depth (limiting the maximum number of bounces per ray) to avoid infinite loops; these cut-offs also let you trade quality for performance.
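To make that recursion concrete, here is a minimal sketch with sphere-only geometry; this is deliberately not three.js code, just the backward-ray-tracing recursion and its two cut-offs in plain Python (all names illustrative):

```python
import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def mul(a, s): return (a[0]*s, a[1]*s, a[2]*s)

def reflect(d, n):
    # Mirror direction d about unit normal n: angle in = angle out.
    return sub(d, mul(n, 2 * dot(d, n)))

def hit_sphere(origin, d, center, radius):
    # Nearest positive ray/sphere intersection distance (d unit length).
    oc = sub(origin, center)
    b = 2 * dot(oc, d)
    disc = b * b - 4 * (dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def trace(origin, d, spheres, intensity=1.0, depth=0):
    """Spheres are (center, radius, reflectivity, color) tuples."""
    # The two stopping conditions described above.
    if intensity < 0.01 or depth > 5:
        return (0.0, 0.0, 0.0)
    nearest, nearest_t = None, float('inf')
    for sphere in spheres:
        t = hit_sphere(origin, d, sphere[0], sphere[1])
        if t is not None and t < nearest_t:
            nearest, nearest_t = sphere, t
    if nearest is None:
        return (0.0, 0.0, 0.0)  # ray escaped into the background
    center, radius, reflectivity, color = nearest
    point = add(origin, mul(d, nearest_t))
    normal = mul(sub(point, center), 1.0 / radius)
    bounced = trace(point, reflect(d, normal), spheres,
                    intensity * reflectivity, depth + 1)
    # Local surface colour plus whatever the mirrored ray brings back.
    return add(mul(color, 1 - reflectivity), mul(bounced, reflectivity))
```

Casting one such ray per screen pixel through a camera gives the backward variant; adding shadow and refraction rays at each hit extends it as described.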
standard polygon rendering
With this approach (I think you are using it right now) you have to improvise. The reflection and illumination effects can be done similarly to shadowing techniques: for each reflective surface you render the scene again, from the reflected direction. The same is done for shadows, except that there you render from the light's direction, or use a shadow map instead. If you have an insane number of reflective surfaces, this approach is not the way; and to achieve reflections of reflections you have to render recursively, making it multiple rendering passes per polygon, which is also insane.
cubemap
You can use a cube map per object. It is similar to option 2, but the insanity happens only once, while generating the cube maps, instead of every frame. If you have too many objects, this is also not the way; you can use cube maps only for objects with reflective surfaces to keep it manageable. Also, if the objects move, you have to re-generate their cube maps once in a while.

Load balancing using the heat equation

I didn't think this was mathsy enough for MathOverflow, so I thought I would try here. It's for a programming problem though, so it's sort of on topic.
I have a graph (as in the maths one) with a number of vertices, A, B, C, ..., each with a load "value"; these are connected in some arbitrary topology (there is at least a spanning tree). The objective is to transfer load between the vertices, preferably using the minimal flow possible.
I wish to know the transfer values along each edge.
The solution I was thinking of was to treat it as a heat-transfer problem and iteratively transfer load, or solve the heat equation in some way, counting the amount of load dissipated along each edge. The total transfer along each edge by the time the network reaches steady state should then yield the result.
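For concreteness, a minimal sketch of that diffusion idea (names illustrative; the rate must stay below 1/max_degree or the iteration oscillates, and note this converges to the average but does not generally minimise total flow):

```python
def diffuse(loads, edges, rate=0.25, tol=1e-9, max_iters=100_000):
    """Iterative 'heat equation' balancing on a graph.
    loads: dict vertex -> load; edges: list of (u, v) pairs.
    Each sweep moves rate * (load[u] - load[v]) along every edge and
    accumulates the movement as per-edge flow."""
    flow = {e: 0.0 for e in edges}
    for _ in range(max_iters):
        delta = {v: 0.0 for v in loads}
        for u, v in edges:
            t = rate * (loads[u] - loads[v])
            flow[(u, v)] += t
            delta[u] -= t
            delta[v] += t
        for v in loads:
            loads[v] += delta[v]
        if max(abs(d) for d in delta.values()) < tol:
            break
    return flow
```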
Whilst I think this will work, it seems like the stupid solution. I am wondering if there is a reference or sample problem that someone can point me to -- I am unsure what keywords to search for.
I could not see how to cast the problem as either a simplex problem or a network-flow problem: each edge has unlimited capacity, and so does each node. There are two simultaneous minimisation problems to solve, so simplex seems not to apply?
If you have a spanning tree, then there is an O(n) solution.
Calculate the average value. (In the end, every node will have this value.) Then iterate over the leaves: calculate the change in value necessary for that leaf (average - value), add that change to the leaf, assign it (as flow) to the edge that connects that leaf to the tree, and subtract it from the other node. Remove that leaf and edge from the tree (not from the graph, of course); the other node may become a new leaf if it has only one remaining edge in the tree.
When you reach the last node, if you've done the arithmetic right it will end up with the average value, just like all the other nodes.
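A minimal sketch of that leaf-peeling pass, assuming the tree is given as an edge list (names illustrative):

```python
def tree_balance(loads, tree_edges):
    """O(n) balancing on a spanning tree. loads: dict vertex -> load;
    tree_edges: list of (u, v) pairs. Returns {(leaf, parent): amount},
    where a positive amount is load pushed from leaf to parent and a
    negative amount flows the other way."""
    avg = sum(loads.values()) / len(loads)
    load = dict(loads)
    remaining = {v: set() for v in loads}
    for u, v in tree_edges:
        remaining[u].add(v)
        remaining[v].add(u)
    flow = {}
    leaves = [v for v, nbrs in remaining.items() if len(nbrs) == 1]
    while leaves:
        leaf = leaves.pop()
        if len(remaining[leaf]) != 1:
            continue  # the final node; it already holds the average
        (parent,) = remaining[leaf]
        surplus = load[leaf] - avg   # negative: the leaf needs load
        flow[(leaf, parent)] = surplus
        load[leaf] = avg
        load[parent] += surplus
        remaining[parent].discard(leaf)
        remaining[leaf].clear()
        if len(remaining[parent]) == 1:
            leaves.append(parent)
    return flow
```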

Wifi Triangulation

What would be the best way to triangulate a wireless network passively? Are there tools available? Algorithms? Libraries?
My goal is to create a relative map of various objects that send or receive signals, using signal strength (dB), signal-to-noise ratio, signal phase, etc., from a few measurement points. With enough sampling, I'm guessing it would be possible to create a good 2D/3D map.
I'm searching for stuff in any language / platform.
Some keywords: wi-fi site survey, visualization, coverage, location, positioning
I'm thinking about using Kismet to gather the data and then processing it. Maybe use free-space path loss for RF in the 2.4 GHz range to calculate a relative distance, and optionally use RF obstacle-attenuation estimates (based on some user input) to give better estimates. Then use trilateration to generate possible relative coordinates.
You can't use the GPS technique because the timing is nothing like accurate enough.
The best you can do is trilateration based on the signal strength from each base station, assuming that range is proportional to signal strength.
You will probably need to force a connection to each base station in turn in order to measure the signal strength.
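Putting the free-space path loss and trilateration ideas together, a minimal sketch (the transmit power is a guess and real APs vary, so treat the output as rough relative positions; numpy assumed available):

```python
import math
import numpy as np

def fspl_distance(rssi_dbm, tx_power_dbm=20.0, freq_mhz=2412.0):
    """Distance estimate in metres from free-space path loss:
    FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55."""
    path_loss = tx_power_dbm - rssi_dbm
    return 10 ** ((path_loss - 20 * math.log10(freq_mhz) + 27.55) / 20)

def trilaterate(points, distances):
    """Least-squares position from >= 3 (x, y) sampling points and the
    estimated distance to the transmitter from each of them.
    Linearises the circle equations by subtracting the first one."""
    (x0, y0), r0 = points[0], distances[0]
    A, b = [], []
    for (xi, yi), ri in zip(points[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(pos)
```

Obstacle attenuation could then be folded in by adding a per-sample loss term to the path-loss budget before converting to distance.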
Interesting question. My initial thought was to use output from something like the Wi-Spy spectrum analyzer. I like the idea of using a directional antenna. It looks like some research may be underway.
Instead of trilateration you could use bilinear interpolation. This is said to be better for non-linear distance vs. signal strength data, like Wi-Fi in an urban environment. http://courses.cit.cornell.edu/ee476/FinalProjects/s2007/ayl26_ym82/ayl26_ym82/index.htm has the background math and what I assume is AVR C for doing it with magnetic field sensors.
Using signal strength to judge distance could easily be thrown off by differences in materials blocking line-of-sight to each of the sampling points. It would probably be better to do the sampling with a directional antenna, and from each sampling point, find the bearing that maximizes signal strength to each device you want to locate. With this technique, you can use only two or three sampling locations, depending on the accuracy with which you can estimate the bearings.
Ars Technica has an article about this, citing the Fraunhofer Institute and Skyhook Wireless. This technology is built into every iPhone and iPad.
Actually, I think you should try using an algorithm like the one GPS uses (see Wikipedia); of course, you can simplify it according to your needs. For example:
you need to install, on every item that should broadcast its position (the navigation signal), an application that actually does so
you should use a different channel for every single item, to be sure not to generate collisions (this also depends on how often you broadcast the signal)
So if you place at least four broadcasters, every client can triangulate and calculate its own position. Naturally, the broadcasters should be as similar as possible in their response.
By the way, these are just ideas.
