Optimize route on indoor map (warehouse) with R

Our project is focused on optimizing routes within a warehouse in R.
We have multiple numbered "from" and "to" locations. There are also some constraints: for example, you have to follow specific pathways only.
(blue - starting location, red - points to visit, grey - walls (constraints, can't go through))
What is the optimal way to start off?

Without knowing exactly what you mean by "S-shape pathways", I think you could model your problem as a network flow problem, also known as a min-cut/max-flow problem. This would be a linear model; you could use the lpSolveAPI package for that.
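For illustration only (my own sketch, not necessarily the model the answer above has in mind): with the allowed pathway segments as arcs (i, j) of length c_ij, a single route from a start node s to a target node t is the solution of

    \[
    \min \sum_{(i,j) \in A} c_{ij}\,x_{ij}
    \quad \text{s.t.} \quad
    \sum_{j:(i,j) \in A} x_{ij} \;-\; \sum_{j:(j,i) \in A} x_{ji} =
    \begin{cases} 1 & i = s \\ -1 & i = t \\ 0 & \text{otherwise} \end{cases}
    \qquad x_{ij} \ge 0 .
    \]

Walls are handled simply by leaving the blocked arcs out of the arc set A; the resulting objective and constraints can be built row by row in lpSolveAPI.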

Related

Create graph of distinct routes

Problem: I have a binary image with traversable and blocked cells. On this map, points of interest (POIs) are set. My goal is to create a graph from these POIs, respecting obstacles (see images), which represents all possible and truly distinct paths. Two paths are truly distinct if they cannot be joined into one path. E.g. if the outside of the building in picture 1 were accessible, a path around the building could not be merged with one through the building.
Researched: I have looked at maze solvers and various shortest-path algorithms (e.g. A*, Theta*, Phi*), and while they'd be useful for this problem, they only search for a path between two points and don't consider already established routes.
Best Guess: I am considering using Phi* to search for all possible routes and merging them afterwards using magic (ideas?), but this will not give me truly distinct alternatives.
Can someone help?
P.S.: I'm using C++ and am not really eager to do this by myself, so if there is a library which already does this... :)
I found (and decided to use) a parallel thinning algorithm (Zhang-Suen for now) to create an image skeleton. This effectively makes the assumption that the geometry shapes the common routes, which is fine, I think.
By using the Rutovitz crossing number I can extract bifurcations and crossings from the resulting skeleton. Then I'll determine the shortest line of sight from my Points of Interest (using Bresenham's algorithm) to the extracted crossings to connect them to the graph.
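For anyone following the same route, here is a small hedged C++ sketch (my own, untested, with invented names and image layout) of the Rutovitz crossing number on a binary skeleton image; pixels where the value is 3 or more are the bifurcations and crossings mentioned above:

    // Rutovitz crossing number at skeleton pixel (x, y).
    // img is a binary skeleton stored row-major, width w, height h; 1 = skeleton, 0 = background.
    // This is a sketch of the idea, not the actual code used.
    #include <vector>
    #include <cstdlib>

    int crossingNumber(const std::vector<int>& img, int w, int h, int x, int y)
    {
        // 8-neighbourhood in clockwise order, starting east.
        static const int dx[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
        static const int dy[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };

        auto at = [&](int px, int py) -> int {
            if (px < 0 || py < 0 || px >= w || py >= h) return 0;
            return img[py * w + px];
        };

        int sum = 0;
        for (int i = 0; i < 8; ++i) {
            int a = at(x + dx[i], y + dy[i]);
            int b = at(x + dx[(i + 1) % 8], y + dy[(i + 1) % 8]);
            sum += std::abs(a - b);
        }
        return sum / 2;   // 1 = end point, 2 = continuation, >= 3 = bifurcation/crossing
    }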
I hope this will help someone along the road :)

Aligning two clouds using two manually selected points

I'm maintaining software which uses PCL. I'm not very experienced with PCL myself; I've only tried some examples and tried to understand the official PCL documentation (which is unfortunately mainly sparse, Doxygen-generated text). My impression is that only PCL contributors have a real chance of using the library efficiently.
One feature I have to fix in the software is aligning two clouds. The clouds are two objects which should be stacked together with a layer in between (the actual task is to calculate the volume of the layer).
I hope the picture explains the task well. Both objects are scanned from the sides that will be stacked together (one from above and the other from below). On both clouds the user manually selects two points. Then, as I hope, there should be a means in PCL to align the two clouds, given the clouds and the coordinates of the selected points. The alignment is required only in the X-Y plane.
Unfortunately I can't find out which function I should use for this, partly because the PCL documentation is IMHO really humble, partly because of my lack of experience.
My desperate idea was to stack the clouds using P1 as the origin of both and then rotate the second cloud manually by the angle calculated between P11,P21 and P12,P22. This works, but since the task seems very common to me, I'd expect PCL to provide a dedicated function for it.
Could you point me to a proper API-function, code-snippet, example, similar project or a good book helping to understand PCL API and usage?
Many thanks!
I think this problem does not need PCL. It is simple enough to form the correct linear equation and solve it.
If you want to use PCL without worrying about the maths too much (though, if the maths is a mystery to you, then studying some computational geometry would be very useful), here is my suggestion.
Most PCL operations work on 3D point clouds. I understand from your question that you only have 2D point clouds OR you don't care about the 3rd dimension. In this case if I were you I would represent the points as a 3D point cloud and set the z dimension to zero.
You will only need two point clouds with 3 points each, as that is how many points you are feeding to the transformation estimation algorithm. The first 2 points in each cloud will be the points chosen by the user. The third will be any point that you know is the same in both clouds. You need this third one, otherwise the transform is still ambiguous if a general transform is being computed. You can, however, calculate such a point, since you already know 2 points and you know that all the points lie on a common plane (as you have projected them by dropping the z values). Just don't choose it collinear with the other two points. For example, halfway between the two points and 2 cm in the perpendicular direction (making sure to go in the correct direction).
Then you can use the estimateRigidTransformation functions to find the transform.
http://docs.pointclouds.org/1.7.0/classpcl_1_1registration_1_1_transformation_estimation_s_v_d.html
This function is also good for over-determined problems (it is the workhorse of the ICP algorithm in PCL) but as long as you have enough points to determine the transform it should work.
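A rough sketch of how that could look (untested; alignFromTwoPicks, thirdPoint and the 2 cm offset are my own inventions, and the sign of the perpendicular offset may need flipping depending on how the user's points relate between the two scans):

    // Builds two 3-point clouds (the two user-picked points plus a constructed third
    // point) and estimates the rigid transform with PCL's SVD-based estimator.
    // p11/p12 are the picks on the first cloud, p21/p22 on the second, as in the question.
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/registration/transformation_estimation_svd.h>
    #include <Eigen/Core>

    Eigen::Matrix4f alignFromTwoPicks(const pcl::PointXYZ& p11, const pcl::PointXYZ& p12,
                                      const pcl::PointXYZ& p21, const pcl::PointXYZ& p22)
    {
        auto thirdPoint = [](const pcl::PointXYZ& a, const pcl::PointXYZ& b) {
            // Halfway between a and b, offset perpendicularly in the X-Y plane
            // (z is zero everywhere, as suggested above). Flip the sign of 'perp'
            // if the two scans mirror each other.
            Eigen::Vector3f mid  = (a.getVector3fMap() + b.getVector3fMap()) * 0.5f;
            Eigen::Vector3f dir  = (b.getVector3fMap() - a.getVector3fMap()).normalized();
            Eigen::Vector3f perp(-dir.y(), dir.x(), 0.0f);   // 90 degrees in the plane
            Eigen::Vector3f p = mid + 0.02f * perp;          // 2 cm offset
            return pcl::PointXYZ(p.x(), p.y(), 0.0f);
        };

        pcl::PointCloud<pcl::PointXYZ> src, tgt;
        src.push_back(p21); src.push_back(p22); src.push_back(thirdPoint(p21, p22));
        tgt.push_back(p11); tgt.push_back(p12); tgt.push_back(thirdPoint(p11, p12));

        pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> est;
        Eigen::Matrix4f transform = Eigen::Matrix4f::Identity();
        est.estimateRigidTransformation(src, tgt, transform);   // maps src onto tgt
        return transform;   // apply to the second cloud with pcl::transformPointCloud
    }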

Path finding for games

What are some path finding algorithms used in games of all types? (Of all types where characters move, anyway) Is Dijkstra's ever used? I'm not really looking to code anything; just doing some research, though if you paste pseudocode or something, that would be fine (I can understand Java and C++).
I know A* is like THE algorithm to use in 2D games. That's great and all, but what about 2D games that are not grid-based? Things like Age of Empires, or Link's Awakening. There aren't distinct square spaces to navigate to, so what do they do?
What do 3D games do? I've read this thingy http://www.ai-blog.net/archives/000152.html, which I hear is a great authority on the subject, but it doesn't really explain HOW, once the meshes are set, the path finding is done. IF A* is what they use, then how is something like that done in a 3D environment? And how exactly do the splines work for rounding corners?
Dijkstra's algorithm calculates the shortest path to all nodes in a graph that are reachable from the starting position. For your average modern game, that would be both unnecessary and incredibly expensive.
You make a distinction between 2D and 3D, but it's worth noting that for any graph-based algorithm, the number of dimensions of your search space doesn't make a difference. The web page you linked to discusses waypoint graphs and navigation meshes; both are graph-based and could in principle work in any number of dimensions. Although there are no "distinct square spaces to move to", there are discrete "slots" in the space that the AI can move to and which have been carefully laid out by the game designers.
Concluding, A* is actually THE algorithm to use in 3D games just as much as in 2D games. Let's see how A* works:
1. At the start, you know the coordinates of your current position and your target position. You make an optimistic estimate of the distance to your destination, for example the length of the straight line between the start position and the target.
2. Consider the adjacent nodes in the graph. If one of them is your target (or contains it, in case of a navigation mesh), you're done.
3. For each adjacent node (in the case of a navigation mesh, this could be the geometric center of the polygon or some other kind of midpoint), estimate the associated cost of traveling along there as the sum of two measures: the length of the path you'd have traveled so far, and another optimistic estimate of the distance that would still have to be covered.
4. Sort your options from the previous step by their estimated cost together with all options that you've considered before, and pick the option with the lowest estimated cost. Repeat from step 2.
There are some details I haven't discussed here, but this should be enough to see how A* is basically independent of the number of dimensions of your space. You should also be able to see why this works for continuous spaces.
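To make the steps above concrete, here is a short, hedged C++ sketch of A* over an arbitrary graph (the Node layout and names are invented for illustration; a navigation mesh or waypoint graph would just supply different nodes, positions and edge costs):

    #include <vector>
    #include <queue>
    #include <cmath>
    #include <limits>
    #include <functional>
    #include <algorithm>

    struct Node { float x, y; std::vector<std::pair<int, float>> edges; }; // (neighbour, cost)

    std::vector<int> aStar(const std::vector<Node>& graph, int start, int goal)
    {
        auto heuristic = [&](int i) {                        // optimistic estimate: straight-line distance
            float dx = graph[i].x - graph[goal].x, dy = graph[i].y - graph[goal].y;
            return std::sqrt(dx * dx + dy * dy);
        };

        const float INF = std::numeric_limits<float>::infinity();
        std::vector<float> g(graph.size(), INF);             // best known cost so far
        std::vector<int>   parent(graph.size(), -1);
        using Entry = std::pair<float, int>;                 // (g + h, node)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

        g[start] = 0.0f;
        open.push({heuristic(start), start});

        while (!open.empty()) {
            auto [f, n] = open.top(); open.pop();
            if (n == goal) break;                            // cheapest-estimate node is the goal: done
            if (f > g[n] + heuristic(n)) continue;           // stale queue entry, skip
            for (auto [m, cost] : graph[n].edges) {
                if (g[n] + cost < g[m]) {                    // found a cheaper way to m
                    g[m] = g[n] + cost;
                    parent[m] = n;
                    open.push({g[m] + heuristic(m), m});     // path so far + optimistic remainder
                }
            }
        }

        std::vector<int> path;                               // walk back from goal to start
        for (int n = goal; n != -1; n = parent[n]) path.push_back(n);
        std::reverse(path.begin(), path.end());
        return path.front() == start ? path : std::vector<int>{};
    }

The only place the dimensionality shows up is the heuristic and the node positions, which is why the same routine works for 2D grids, navigation meshes and full 3D waypoint graphs.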
There are some closely related algorithms that deal with certain problems in the standard A* search. For example recursive best-first search (RBFS) and simplified memory-bounded A* (SMA*) require less memory, while learning real-time A* (LRTA*) allows the agent to move before a full path has been computed. I don't know whether these algorithms are actually used in current games.
As for the rounding of corners, this can be done either with distance lines (where corners are replaced by circular arcs), or with any kind of spline function for full-path smoothing.
In addition, algorithms are possible that rely on a gradient over the search space (where each point in space is associated with a value), rather than a graph. These are probably not applied in most games because they take more time and memory, but might be interesting to know about anyway. Examples include various hill-climbing algorithms (which are real-time by default) and potential field methods.
Methods to procedurally obtain a graph from a continuous space exist as well, for example cell decomposition, Voronoi skeletonization and probabilistic roadmap skeletonization. The former would produce something compatible with a navigation mesh (though it might be hard to make it equally efficient as a hand-crafted navigation mesh) while the latter two produce results that will be more like waypoint graphs. All of these, as well as potential field methods and A* search, are relevant to robotics.
Sources:
Artificial Intelligence: A Modern Approach, 2nd edition
Introduction to The Design and Analysis of Algorithms, 2nd edition

How can I produce visualizations combining network graphs and imaginary maps?

Basically, I'm looking for something like this awesome research project: Gmap, which was referenced in this related SO question.
It's a rather novel data visualization that combines a network graph with an imaginary set of regions that looks like a map. Basically, the map-ification helps humans comprehend the enormous data set better.
Cool, huh? GMap doesn't appear to be open source, though I plan to contact the authors.
I already know how to create a network graph with a force-directed layout (currently using Prefuse/Flare), so an answer could be a way to layer a mapping algorithm on top of an existing graph. I'm also not concerned about the client-side at all right now - this would be a backend process, and I am flexible about technology stack and data output at this stage.
There's also this paper that describes the algorithm backing GMap. If you have heard of Voronoi diagrams (which rock, but make my head hurt), this paper is for you. I quit after Calc 1, though, so I'm hoping to avoid remembering what sigmas and epsilons are.
As a start, could you do a simple closest-point sort of algorithm? It would look something like this: you have your force-directed layout and have computed some sort of bounding box. Now you want to render it. Adjust your bounding box to line up with the origin, and then, as you calculate the color of each pixel, find its closest point. This should generate some semblance of regions and should be quite simple to try out. Of course, it isn't going to be as pretty as GMap, but maybe it's a start? The runtime would be awful, but... I don't know about you, but computing boundary lines directly sounds a lot harder to me.
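A minimal sketch of that idea (my own; NodePos, paintRegions and the flat RGB pixel buffer are assumptions, not part of any library mentioned here):

    // Brute-force "closest point" colouring: for every pixel, find the nearest graph
    // node and paint the pixel with that node's colour, yielding Voronoi-like regions
    // around the laid-out nodes.
    #include <vector>
    #include <cstdint>
    #include <limits>

    struct NodePos { float x, y; uint32_t color; };   // node position from the force-directed layout

    void paintRegions(std::vector<uint32_t>& pixels, int width, int height,
                      const std::vector<NodePos>& nodes)
    {
        for (int py = 0; py < height; ++py) {
            for (int px = 0; px < width; ++px) {
                float best = std::numeric_limits<float>::max();
                uint32_t color = 0;
                for (const NodePos& n : nodes) {          // O(pixels * nodes): awful, but simple
                    float dx = n.x - px, dy = n.y - py;
                    float d2 = dx * dx + dy * dy;         // squared distance is enough to compare
                    if (d2 < best) { best = d2; color = n.color; }
                }
                pixels[py * width + px] = color;
            }
        }
    }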

How can I compute the mass and moment of inertia of a polyhedron?

For use in a rigid body simulation, I want to compute the mass and inertia tensor (moment of inertia), given a triangle mesh representing the boundary of the (not necessarily convex) object, and assuming constant density in the interior.
Assuming your trimesh is closed (whether convex or not) there is a way!
As dmckee points out, the general approach is building tetrahedrons from each surface triangle, then applying the obvious math to total up the mass and moment contributions from each tet. The trick comes in when the surface of the body has concavities that make internal pockets when viewed from whatever your reference point is.
So, to get started, pick some reference point (the origin in model coordinates will work fine); it doesn't even need to be inside the body. For every triangle, connect the three points of that triangle to the reference point to form a tetrahedron. Here's the trick: use the triangle's surface normal to figure out whether the triangle is facing towards or away from the reference point (which you can determine from the sign of the dot product of the normal with a vector from the reference point to the centroid of the triangle). If the triangle is facing away from the reference point, treat its mass and moment normally, but if it is facing towards the reference point (suggesting that there is open space between the reference point and the solid body), negate your results for that tet.
Effectively what this does is over-count chunks of volume and then correct once those areas are shown to be not part of the solid body. If a body has lots of blubbery flanges and grotesque folds (got that image?), a particular piece of volume may be over-counted by a hefty factor, but it will be subtracted off just enough times to cancel it out if your mesh is closed. Working this way you can even handle internal bubbles of space in your objects (assuming the normals are set correctly). On top of that, each triangle can be handled independently so you can parallelize at will. Enjoy!
Afterthought: You might wonder what happens when that dot product gives you a value at or near zero. This only happens when the triangle face is parallel (its normal is perpendicular) to the direction to the reference point, which only happens for degenerate tets with small or zero volume anyway. That is to say, the decision to add or subtract a tet's contribution is only questionable when the tet wasn't going to contribute anything anyway.
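A compact sketch of the accumulation (my own, with invented names; with the reference point at the origin and consistently outward-wound triangles, the sign of the scalar triple product plays the role of the normal/dot-product test described above; the inertia terms would be accumulated per tetrahedron in the same way):

    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Tri  { Vec3 a, b, c; };    // vertices in outward (counter-clockwise) order

    static double signedTetVolume(const Tri& t)
    {
        // (a . (b x c)) / 6 for the tetrahedron (origin, a, b, c); negative when the
        // face points towards the origin, positive when it points away.
        const Vec3& a = t.a; const Vec3& b = t.b; const Vec3& c = t.c;
        return (a.x * (b.y * c.z - b.z * c.y)
              + a.y * (b.z * c.x - b.x * c.z)
              + a.z * (b.x * c.y - b.y * c.x)) / 6.0;
    }

    double meshMass(const std::vector<Tri>& surface, double density)
    {
        double volume = 0.0;
        for (const Tri& t : surface)
            volume += signedTetVolume(t);   // over-counts and cancels, as described above
        return density * volume;            // positive for a closed, outward-wound mesh
    }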
Decompose your object into a set of tetrahedrons around the selected interior point. (That is, solids formed from each triangular face element and the chosen center.)
You should be able to look up the volume of each element. The moment of inertia should also be available.
It gets to be rather more trouble if the surface is non-convex.
I seem to have misremembered my nomenclature: "skew" is not the adjective I wanted; I mean non-regular.
This is covered in the book "Game Physics, Second Edition" by D. Eberly. Chapter 2.5.5 and sample code are available online. (Just found it, haven't tried it out yet.)
Also note that the polyhedron doesn't have to be convex for the formulas to work, it just has to be simple.
I'd take a look at vtkMassProperties. This is a fairly robust algorithm for computing this, given a surface enclosing a volume.
If your polyhedron is complicated, consider using Monte Carlo integration, which is often used for multidimensional integrals. You will need an enclosing hypercube, and you will need to be able to test whether a given point is inside or outside the polyhedron. And you will need to be patient, as Monte Carlo integration is slow.
Start as usual at Wikipedia, and then follow the external links pages for further reading.
(For those unfamiliar with Monte Carlo integration, here's how to compute a mass. Pick a random point in the containing hypercube. Add to the points_total counter. Is it in the polyhedron? If yes, add to the points_internal counter. Do this lots of times (see the convergence and error bound estimates). Then
mass_polyhedron / mass_hypercube ≈ points_internal / points_total.
For a moment of inertia, you weight each count by the square of the distance of the point to the reference axis.)
The tricky part is testing whether a point is inside or outside your polyhedron. I'm sure that there are computational geometry algorithms for that.
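A bare-bones sketch of that estimate (in C++ to match the rest of the thread; Box, monteCarloMass and isInside are placeholders, the last one standing in for whatever point-in-polyhedron test you choose, e.g. ray casting with a parity count):

    #include <random>

    struct Box { double xmin, xmax, ymin, ymax, zmin, zmax; };

    bool isInside(double x, double y, double z);   // assumed: point-in-polyhedron test

    double monteCarloMass(const Box& box, double density, long samples)
    {
        std::mt19937_64 rng(12345);
        std::uniform_real_distribution<double> ux(box.xmin, box.xmax),
                                               uy(box.ymin, box.ymax),
                                               uz(box.zmin, box.zmax);
        long pointsInternal = 0;
        for (long pointsTotal = 0; pointsTotal < samples; ++pointsTotal)
            if (isInside(ux(rng), uy(rng), uz(rng)))
                ++pointsInternal;

        double boxVolume = (box.xmax - box.xmin) * (box.ymax - box.ymin) * (box.zmax - box.zmin);
        // volume_polyhedron ~= boxVolume * pointsInternal / pointsTotal
        return density * boxVolume * double(pointsInternal) / double(samples);
    }

For a moment of inertia you would accumulate, for each accepted point, the squared distance to the reference axis instead of a simple count.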
