Handle traffic in 2D city builder

I'm creating a 2D isometric city-building simulation, and today I have a "best practice" kind of question rather than a request for specific code.
As in all city-building games, you are able to place buildings, roads and so on. The player is able to place a building anywhere, no matter whether it's connected to a road or not. In addition, there is one building (call it the center building) that all buildings need to be connected to by road.
I need to handle that without doing so many calculations that the FPS breaks down.
Right now I have a timer job for each building which checks whether one of the tiles surrounding the building is a road. That works fine, even for a lot of buildings, since the check is simple.
But now I would like to check the connection to the center building. To check that, it is necessary (in my opinion) to use something like a pathfinder, which checks whether one of the building's surrounding tiles has a road connection to one of the tiles surrounding the center building.
I can't run that check frequently, because it completely smashes the FPS down to 30 or lower. My idea was to fire an event whenever a road is built or destroyed and "recalculate" the road connections then. But that raises another problem: the player might destroy a road in the middle of the map, and the buildings might be really far away from each other, so I would need to find the affected buildings, which could also take too much time.
My last idea is to create something like a timer queue and work through its items gradually, but before I keep going with trial and error I would like to ask you for ideas.
Really looking forward to your ideas!
Yheeky

You could have each building store a list of tiles (a path) that connects it to the center building. Then, when a road tile is destroyed by the player, you can test each building to see whether its path ran through that tile or not.
Alternatively, you could have each road tile store which buildings depend on it, so when the tile is destroyed the affected buildings can immediately raise flags. This could get quite messy, but it lends itself decently to background cleanup passes.
Both methods are quite messy. Perhaps you can make it a rule that the player can't place roads except next to other roads or next to the center building. Then, when the player deletes a road tile, do a paint-style flood fill that destroys all of the disconnected roads, as sketched below. You can also periodically run a random check on tiles to see whether they are illegally placed, but that should be unnecessary if you're careful.
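A minimal sketch of that flood fill, assuming a plain 2D tile grid and a center building that occupies a single tile; the names (markConnectedRoads, isRoad) are placeholders, not code from the game:

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// BFS outwards from the center building: every road tile we can reach is
// connected; any road tile left unmarked afterwards is orphaned and can be
// destroyed (or have its buildings flagged as disconnected).
std::vector<std::vector<bool>> markConnectedRoads(
    int width, int height, int centerX, int centerY,
    const std::function<bool(int, int)>& isRoad) {
  std::vector<std::vector<bool>> connected(width, std::vector<bool>(height, false));
  std::queue<std::pair<int, int>> open;
  open.push({centerX, centerY});
  connected[centerX][centerY] = true;
  const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
  while (!open.empty()) {
    auto [x, y] = open.front();
    open.pop();
    for (int i = 0; i < 4; ++i) {
      int nx = x + dx[i], ny = y + dy[i];
      if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
      if (!connected[nx][ny] && isRoad(nx, ny)) {
        connected[nx][ny] = true;
        open.push({nx, ny});
      }
    }
  }
  return connected;
}
```

One pass of this after each road edit answers the connectivity question for every building at once: a building is connected exactly when one of its surrounding tiles is a marked road tile, so no per-building pathfinding is needed.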

Related

Proximity Distortion in Depth Image

Description:
The goal of my current project is to determine the location of an "object" using just its 3D coordinates.
To achieve that, I figured it would be best to turn off the "Fill" mode of my camera (a ZED 2 from Stereolabs), because I want some hard edges in my depth image.
The Problem:
The depth image is heavily distorted by the proximity of other "objects".
The following image shows the depth image from the side; it is viewing some bars in front of a smooth wooden wall. The wall is mostly flat, so everything is fine there.
I blacked out the color image and myself; do not worry about those parts.
When I put my hand or another object in front of the wooden wall, parts that are bigger than my actual hand get "pulled" towards the camera around the location of the hand or other object. These parts seem to "stick" to other elevated parts in the vicinity, as the area between the bars and my arm gets pulled forward entirely.
Question(s):
Is this normal?
Is there an easy way to get rid of it?
What is the reason behind it?
My own assumption(s):
It feels like this is some sort of approximation of unknown parts.
Hopefully there is an easy way to get rid of it. I'm glad the camera was calibrated by default, as that is usually a pain to do right.
Because of the new object placed in front of the wall, more of the scene is hidden, so there are more areas that the camera cannot see with both lenses. Maybe it just "guesses" that the area in between is not so far off, due to some underlying algorithm that smooths the image.
First of all, I would advise you to change the depth mode while keeping the sensing mode on STANDARD:
ULTRA: offers the highest depth range and better preserves Z-accuracy along the sensing range.
QUALITY: has a strong filtering stage giving smooth surfaces.
PERFORMANCE: designed to be smooth, can miss some details.
From your description, it seems like you are using PERFORMANCE mode.
The ZED camera uses a matching algorithm to generate the disparity/depth map, which is closed source. I recently contacted Stereolabs about it, and they said: "We cannot disclose this information to you because it's internal information and proprietary to Stereolabs."
Other work on the ZED camera has shown some limitations in depth sensing, especially when there is variation in lighting and shadows ("Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs").
In addition, the depth error grows in direct proportion to the distance of the object from the camera, so make sure to set your depth range properly.
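For illustration, here is roughly how those settings look in the ZED SDK's C++ API. This is a sketch assuming SDK 3.x; names such as SENSING_MODE have moved around between SDK versions, so check the documentation for yours:

```cpp
#include <sl/Camera.hpp>

int main() {
  sl::Camera zed;
  sl::InitParameters init_params;
  init_params.coordinate_units = sl::UNIT::METER;
  init_params.depth_mode = sl::DEPTH_MODE::ULTRA;  // best Z-accuracy along the range
  init_params.depth_maximum_distance = 5.0f;       // keep the range tight: error grows with distance
  if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

  sl::RuntimeParameters runtime_params;
  runtime_params.sensing_mode = sl::SENSING_MODE::STANDARD;  // no fill: keep hard edges and holes

  sl::Mat depth;
  if (zed.grab(runtime_params) == sl::ERROR_CODE::SUCCESS)
    zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);  // depth map with occlusions left invalid
  zed.close();
}
```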

What device/instrument/technology should I use for detecting objects lying on a given surface?

First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what color it has – only the shape and the placement of the object (e.g. "circle") are of interest.
So far I'm using a webcam connected to my computer and Processing's blob functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because that distracts the user's attention. Ideally the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled with a Kinect, because its depth functionality does not work through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV and YCrCb colour spaces. These are much better than RGB for segmenting the required area in colour-based detection.
I recommend you check out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
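As a rough illustration of that pipeline in OpenCV's C++ API (the equivalent calls exist through the Processing wrapper); the HSV bounds and the minimum blob area are placeholders you would tune for your objects:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
  cv::VideoCapture cam(0);  // the webcam watching the underside of the surface
  cv::Mat frame, hsv, mask;
  while (cam.read(frame)) {
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    // Placeholder bounds: segment whatever colour range your objects occupy.
    cv::inRange(hsv, cv::Scalar(0, 80, 80), cv::Scalar(20, 255, 255), mask);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
      if (cv::contourArea(c) < 200.0) continue;  // drop noise blobs
      cv::Rect box = cv::boundingRect(c);        // shape + placement, nothing else
      cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
    }
    cv::imshow("objects", frame);
    if (cv::waitKey(1) == 27) break;  // Esc quits
  }
  return 0;
}
```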
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints), you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers, and you'll differentiate the objects by ID, but also be able to get position/rotation/etc. – and your hands will not be part of that.

Motion tracking goes way off

So I've been messing around with Project Tango, and noticed that if I turn on a motion tracking app and leave the device on a table (blocking all the cameras), the motion tracking goes off in crazy directions and makes incredibly wrong predictions about where I'm going (I'm not even moving, but the device thinks I'm going 10 meters to the right). I'm wondering if there is some exception that can be thrown, or some warning or API call I can use, to stop this from happening.
If you block all the cameras, there are no features the camera can capture,
so motion tracking may end up in one of two states:
1. Not moving.
2. Drifting to Hawaii.
Either one may happen.
If you did block the fisheye camera, yes, this is expected.
For the API, there is a way to handle it.
Please check the life cycle section of the motion tracking concept documentation.
For example, for C/C++:
https://developers.google.com/project-tango/apis/c/c-motion-tracking
If the API reports pose_data as TANGO_POSE_INVALID, the motion tracking system can be reinitialized in two ways. If config_enable_auto_recovery was set to true, the system immediately enters the TANGO_POSE_INITIALIZING state and uses the last valid pose as the starting point after recovery. If config_enable_auto_recovery was set to false, the system essentially pauses and keeps returning poses as TANGO_POSE_INVALID until TangoService_resetMotionTracking() is called. Unlike auto recovery, this also resets the starting point after recovery back to the origin.
You can also add handling of adverse situations with the UX framework to your app.
Check the link:
https://developers.google.com/project-tango/ux/ux-framework-exceptions
The last solution is to write a function that handles drifting yourself, by measuring the velocity of pose_data and calling TangoService_resetMotionTracking() when it becomes implausible, along these lines:
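A hypothetical sketch of that guard against the Tango C API: reject poses that are invalid or imply an absurd velocity, and reset tracking when either happens. The speed threshold and the previous-pose bookkeeping are assumptions, not part of the API:

```cpp
#include <cmath>
#include <tango_client_api.h>

static TangoPoseData prev_pose;       // last accepted pose
static const double kMaxSpeed = 5.0;  // m/s; anything faster is treated as drift (assumption)

// Shape of the callback registered with TangoService_connectOnPoseAvailable().
void on_pose_available(void* /*context*/, const TangoPoseData* pose) {
  if (pose->status_code != TANGO_POSE_VALID) {
    TangoService_resetMotionTracking();  // manual recovery path
    return;
  }
  double dt = pose->timestamp - prev_pose.timestamp;
  if (dt > 0.0) {
    double dx = pose->translation[0] - prev_pose.translation[0];
    double dy = pose->translation[1] - prev_pose.translation[1];
    double dz = pose->translation[2] - prev_pose.translation[2];
    double speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
    if (speed > kMaxSpeed) {
      TangoService_resetMotionTracking();  // implausible jump: re-anchor
      return;
    }
  }
  prev_pose = *pose;  // accept the pose
}
```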
I run a filter on the intake that tries not to let obviously ridiculous pose changes through, and I don't trust any reported point whose texel is white, nor any pose where the entire texture is anywhere near black.

How do I generate a waypoint map in a 2D platformer without expensive jump simulations?

I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. For each of those platforms, if it is obviously unreachable (due to being higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
3a. Repeat this jump attempt for each possible starting position on the starting platform.
4. If the attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
6. Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
linked platforms (Hyperlink because no rep.)
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason is obvious: the yellow text in the centre of the screen shows how long it took to build the graph – over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally just run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms then it'll be a joke at the hundreds I need to support. Heck, at this geometric time cost it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict whether a jump is impossible I have managed to lower the number of expensive tests run by a whopping 33 (12%!). However the more exceptions and special cases I add to the code the more convinced I am that the actual problem is in the jump simulation code, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but which doesn't provide a good solution. This is literally the only resource I've found.
You could limit the number of comparisons by only comparing nearby platforms. I would probably check just the horizontal distance between platforms, and if it is wider than the longest possible jump, not bother checking for a link between those two. But you might have done this already, since you check for the maximum height of a jump.
I glanced at the video and it gave me an idea. Instead of looking at all platforms to find which jumps are impossible, what if you did the opposite? Try placing an AI character on all platforms and see which other platforms they can reach. That's certainly easier to implement if your enemies can't change direction in midair though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the amount of comparisons you need to make by using a spatial data structure, like a quad tree. This would allow you to severely limit how many platforms you're even trying to check. This is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch all the use cases you have – as you allow full horizontal control – but it might let you catch some common cases more quickly.
Consider some kind of walkability grid instead of a link-generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something similar to the dimensions of your agent might be a good starting point). You could also filter by height, so that grid tiles that are higher than your jump height, and that you can't drop onto from a higher place, are marked as unwalkable. Then, as part of your pathfinding step, you can compute whether a jump is actually executable when you start one ("start a jump, I can go vertically no more than 5 tiles, and after the peak of the jump I always fall down vertically with some speed"); see the closed-form sketch after this list.
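Under the question's stated physics (fixed-height jumps, no horizontal inertia, full air control), that per-jump feasibility check has a closed form: compare the horizontal gap against how far the character can drift during the jump's airtime. A minimal sketch; all parameter names are assumptions, and it ignores ceilings and blocks in the way, so it is a pruning test, not a full verification:

```cpp
#include <cmath>

// Can a character jump from one platform to another, given a fixed apex
// height, instant horizontal speed and full air control? Necessary but not
// sufficient: intervening geometry still needs a separate check.
bool canReach(float dx,          // horizontal gap to the target platform
              float dy,          // target height above the start (up is positive)
              float jumpHeight,  // fixed apex height of a jump
              float hSpeed,      // horizontal movement speed
              float gravity) {   // downward acceleration
  if (dy > jumpHeight) return false;  // the apex never gets that high
  float tUp = std::sqrt(2.0f * jumpHeight / gravity);           // time to the apex
  float tDown = std::sqrt(2.0f * (jumpHeight - dy) / gravity);  // apex down to target
  return std::fabs(dx) <= hSpeed * (tUp + tDown);  // enough airtime to drift across?
}
```

Pairs that fail this test can be discarded without simulating a single frame, which attacks the 24,000-frame cost directly; only the pairs that pass need any collision-aware simulation.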

A* Pathfinding - closest to unwalkable destination

I already have an A* implementation that works. The problem is that if you pick a destination that is unwalkable, no path is returned. I want to be able to get as close to it as I can.
The preferable option would be completely dynamic (not just checking the 8 tiles around the destination to try to find one). That way, even if they click an unwalkable tile surrounded by a huge square of unwalkable tiles, it will still get as close as it can.
While the simple answers provided here MIGHT be sufficient, I think it depends on your game type and what you're trying to achieve.
For example, take this play field (sorry I'm reusing the same software I used to show you the fog of war :)) :
As you can see, an Angry Chicken is blocking the path between the left side and the right side. The Angry Chicken could be anything... if it's a static obstacle, then going with the lowest-h node might be enough, but if it's a dynamic object (like a locked door, drawbridge, etc.) the following examples might help you figure out how you want to solve your problem.
If we set the destination for our Hero on the other side,
we need to think about what we want the path to be, since obviously we can't reach it. Using a standard heuristic like Manhattan distance or Euclidean distance, you will get this result:
Which might be good enough; but if there is any way our little Hero could interact with the chicken to get past, it doesn't make sense at all. What you want is this:
How can you do this? Well, an easy way is to pathfind on hierarchical graphs. This sounds complicated, but it isn't. First, you want to build a new set of high-level nodes and edges, each containing multiple grid nodes (or another representation; it wouldn't change a thing):
As you can see, we now have a blue node on the right and a red node on the left. The arrow represents the edge between the two nodes. How do you build this graph, you ask? It's easy: simply start from an open node, expand all of its neighbours and add them to a high-level node; when you're done, open the dynamic nodes that could lead to another part of the graph and do the same.
Now, when you ask for a path from our Hero to the red X, you first do the pathfinding on the high level... is there a way from blue node to red node? Yes! Through the chicken.
You can now easily know how to navigate on the blue side by going to the edge that will allow you to cross, which is the chicken.
If it was just a plain wall, you could determine very quickly, by visiting a single node, that there is NO way to reach the other side, and then handle it however you want – possibly still performing an A* and returning the lowest-h node.
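A minimal sketch of building that high-level graph, assuming a tile grid where dynamic obstacles (the chicken, a locked door) are their own tile type; every name here is an assumption:

```cpp
#include <queue>
#include <set>
#include <utility>
#include <vector>

enum Tile { WALKABLE, WALL, DYNAMIC };

// Flood-fill labels each connected walkable region (the high-level nodes);
// any DYNAMIC tile touching two different regions becomes a high-level edge.
std::vector<std::vector<int>> labelRegions(
    const std::vector<std::vector<Tile>>& grid,
    std::set<std::pair<int, int>>& edges) {  // out: pairs of linked region ids
  int W = grid.size(), H = grid[0].size(), next = 0;
  std::vector<std::vector<int>> region(W, std::vector<int>(H, -1));
  const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
  for (int x = 0; x < W; ++x)
    for (int y = 0; y < H; ++y) {
      if (grid[x][y] != WALKABLE || region[x][y] != -1) continue;
      std::queue<std::pair<int, int>> open;  // BFS over one region
      open.push({x, y});
      region[x][y] = next;
      while (!open.empty()) {
        auto [cx, cy] = open.front();
        open.pop();
        for (int i = 0; i < 4; ++i) {
          int nx = cx + dx[i], ny = cy + dy[i];
          if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
          if (grid[nx][ny] == WALKABLE && region[nx][ny] == -1) {
            region[nx][ny] = next;
            open.push({nx, ny});
          }
        }
      }
      ++next;
    }
  for (int x = 0; x < W; ++x)  // now find the edges through dynamic tiles
    for (int y = 0; y < H; ++y) {
      if (grid[x][y] != DYNAMIC) continue;
      std::set<int> touching;
      for (int i = 0; i < 4; ++i) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx >= 0 && ny >= 0 && nx < W && ny < H && region[nx][ny] >= 0)
          touching.insert(region[nx][ny]);
      }
      for (int a : touching)
        for (int b : touching)
          if (a < b) edges.insert({a, b});
    }
  return region;
}
```

The high-level search then only has to ask whether the start's region can reach the goal's region through these edges – a graph of a handful of nodes rather than the whole grid.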
You could keep a pointer to the tile with the lowest h-value; then, if no path is returned, simply generate a path to that tile instead, as in the sketch below.
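A self-contained sketch of that idea on a 4-connected grid with a Manhattan heuristic; the grid representation and all names are assumptions (for brevity it returns the closest tile rather than the reconstructed path):

```cpp
#include <cstdlib>
#include <queue>
#include <utility>
#include <vector>

struct Node { int x, y, g, h; };
struct ByF {  // min-heap on f = g + h
  bool operator()(const Node& a, const Node& b) const { return a.g + a.h > b.g + b.h; }
};

static int manhattan(int x, int y, int gx, int gy) {
  return std::abs(x - gx) + std::abs(y - gy);
}

// Returns the goal if it is reachable, otherwise the visited tile with the
// lowest h – i.e. the walkable tile closest to the (possibly unwalkable) goal.
std::pair<int, int> closestReachable(const std::vector<std::vector<bool>>& walkable,
                                     int sx, int sy, int gx, int gy) {
  int W = walkable.size(), H = walkable[0].size();
  std::vector<std::vector<bool>> closed(W, std::vector<bool>(H, false));
  std::priority_queue<Node, std::vector<Node>, ByF> open;
  open.push({sx, sy, 0, manhattan(sx, sy, gx, gy)});
  std::pair<int, int> best{sx, sy};
  int bestH = manhattan(sx, sy, gx, gy);
  const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
  while (!open.empty()) {
    Node n = open.top();
    open.pop();
    if (closed[n.x][n.y]) continue;
    closed[n.x][n.y] = true;
    if (n.x == gx && n.y == gy) return {gx, gy};  // destination was reachable
    if (n.h < bestH) { bestH = n.h; best = {n.x, n.y}; }  // track the closest tile
    for (int i = 0; i < 4; ++i) {
      int nx = n.x + dx[i], ny = n.y + dy[i];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
      if (!walkable[nx][ny] || closed[nx][ny]) continue;
      open.push({nx, ny, n.g + 1, manhattan(nx, ny, gx, gy)});
    }
  }
  return best;  // goal unreachable: path to this tile instead
}
```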
