I want to get an estimate of a user's location using the surrounding cell towers. For each tower, I have a location and a signal strength. Currently I use a simple mean of the coordinates, but it is not very accurate (the user is not necessarily centered between the towers).
I guess the solution is to draw a circle around each tower (the weaker the signal strength, the larger the circle) and then compute the intersection between the circles. I usually don't have more than 3 cell towers.
Any idea how? I found the Delaunay triangulation method, but I don't think it applies here.
Thank you
You need to convert each signal strength to an estimate of distance and then use each distance (as the radius of a circle) to trilaterate. You'll need at least three transmitters to resolve the ambiguity, and accuracy will not be great, since signal strength is only very approximately related to distance and is affected by numerous external factors in the real world. Note that in ideal (free-space) conditions, signal strength follows an inverse-square law with distance.
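As a rough illustration of that pipeline, here is a minimal sketch, assuming a log-distance path-loss model for the strength-to-distance conversion and a least-squares solve for the circle "intersection" (tx_power and path_loss_exp are placeholder constants that would need calibration against real towers):

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate distance from signal
    strength. tx_power (RSSI at unit distance) and path_loss_exp are
    placeholder constants you would calibrate for real towers."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * path_loss_exp))

def trilaterate(positions, distances):
    """Least-squares 'intersection' of the circles.
    Subtracting the first circle's equation from the others removes the
    quadratic terms, leaving a linear system A x = b in the unknown
    position x, which lstsq solves even when the circles don't quite meet."""
    p = np.asarray(positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

towers = [(0.0, 0.0), (100.0, 0.0), (50.0, 80.0)]
dists = [rssi_to_distance(r) for r in (-90.0, -92.0, -91.0)]
print(trilaterate(towers, dists))  # approximate user position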
I am trying to implement a collision detection algorithm for my game that uses 2D coordinates (x, y) and quads (rectangles). I am terrible at maths, and prior to making this post I wandered through solutions on Stack Overflow, which left me even more confused, as they were stacked with comments saying "this doesn't work for this case" or "there's a better algorithm than this one", etc.
I did manage to implement a simple AABB collision detection and resolution algorithm in the beginning, but later realized that it fails to detect collisions when an object's speed is high enough for it to phase (tunnel) through obstacles in a single step.
My current thought process is to take the object's old position vertices (oldTL, oldTR, oldBL, oldBR) and new position vertices (newTL, newTR, newBL, newBR), create 4 line segments (one from each old vertex to its new counterpart), and find out whether any of them intersects any edge of any object.
I'm very lost and would appreciate any help or feedback I could get...
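For reference, here is the kind of segment-vs-segment test I was picturing (a rough sketch with my own made-up names; it only handles proper crossings, not collinear touches):

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2.
    Uses signed areas (2D cross products) as orientation tests."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    # The segments cross iff each one's endpoints lie on opposite
    # sides of the other's supporting line.
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def swept_quad_hits_edge(old_corners, new_corners, edge_a, edge_b):
    """One sweep segment per corner (old position -> new position),
    tested against a single obstacle edge; all points are (x, y) tuples."""
    return any(segments_intersect(o, n, edge_a, edge_b)
               for o, n in zip(old_corners, new_corners))
```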
I have obtained all the possible paths of a maze through image processing. Now I want to use the A* algorithm to find the shortest path through the maze. However, I am confused as to whether Euclidean distance or Manhattan distance is the better heuristic. Does it depend on the maze type, or is the choice of heuristic independent of it? Which distance (Manhattan or Euclidean) would be a good choice for the following possible paths, and why? Please suggest.
P.S. (Please add your references too, if you have any. It will be helpful.)
The objective of a heuristic is to provide contextual information to the pathfinder. The more accurate this information is, the more efficient the pathfinder can be.
You have two competing requirements for a good heuristic, which is good news because it means there is a sweet spot. Here they are:
A heuristic must be admissible, meaning it must never overestimate the remaining distance. Otherwise the algorithm is broken and may return paths that are not even optimal.
A heuristic should return as large a distance as possible without overestimating. A heuristic that underestimates the remaining path from a cell will favour that cell when another might have been better.
Of course, the ideal heuristic would return the exact, correct length (which generally is not achievable, or defeats the purpose), since it cannot return anything larger without ceasing to be admissible.
In your case, it looks like you're dealing with a 4-connected grid. In that case Manhattan distance is a better metric than Euclidean distance, because Euclidean distance under-estimates the cost of every displacement relative to Manhattan (by the Pythagorean theorem, sqrt(dx^2 + dy^2) <= |dx| + |dy|).
Without any further knowledge than "the graph is a 4-connected grid", there is no better metric than Manhattan. If, however, you manage to obtain more data (obstacle density, "highways", etc.), then you might be able to devise a better heuristic - though keeping it admissible would be a hard problem in itself.
EDIT: Having a closer look, it looks like you have diagonal edges in the bottom left. If that is so, you're not in a 4-connected graph, and you MUST use Euclidean distance, because Manhattan would not be admissible.
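For concreteness, a sketch of the two heuristics, with cells as (x, y) tuples:

```python
import math

def manhattan(a, b):
    # Admissible on a 4-connected grid: every move changes x or y by 1.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Still admissible with diagonal/angled moves, but underestimates
    # on a strictly 4-connected grid, so A* expands more nodes there.
    return math.hypot(a[0] - b[0], a[1] - b[1])
```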
It's not clear what moves are available to your hero. Does your graph form a rectangular grid like a chess board, and can you move diagonally in one step, like a king in chess? If yes, then Chebyshev distance is the best choice: https://en.wikipedia.org/wiki/Chebyshev_distance.
Otherwise use Euclidean distance.
You can't use Manhattan here if you want an optimal path, because the Manhattan heuristic is not admissible on diagonal routes (it overestimates them), so it can lead to suboptimal paths.
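For reference, a sketch of the Chebyshev heuristic on (x, y) cells:

```python
def chebyshev(a, b):
    # Admissible when a diagonal step costs the same as a straight step
    # (king moves in chess).
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
```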
I've been looking at Kevin Beason's path tracer "smallpt" (http://www.kevinbeason.com/smallpt/) and have a question regarding the mirror reflection calculation (line 62).
My understanding of the rendering equation (http://en.wikipedia.org/wiki/Rendering_equation) is that to calculate the outgoing radiance for a differential area, you integrate the incoming radiance over each differential solid angle in the hemisphere above that area, weighted by the BRDF and a cosine factor. The purpose of the cosine factor is to reduce the contribution to the differential irradiance for light arriving at grazing angles, since light at such angles is spread across a larger area, meaning that the differential area in question receives less of it.
But in the smallpt code this cosine factor is not part of the calculation for mirror reflection on line 62. (It is also omitted from the diffuse calculation, but I believe that to be because the diffuse ray is chosen with cosine-weighted importance sampling, meaning that an explicit multiplication by the cosine factor is not needed).
My question is why doesn't the mirror reflection calculation require the cosine factor? If the incoming radiance is the same, but the angle of incidence becomes more grazing, then won't the irradiance landing on the differential area be decreased, regardless of whether diffuse or mirror reflection is being considered?
This relates to a question I raised recently: why is the BRDF of specular reflection infinite in the reflection direction?
For perfect specular reflection, the BRDF is infinite in the reflection direction, so we cannot evaluate the rendering equation by ordinary integration.
But we can set the reflected radiance equal to the incident radiance, by energy conservation.
The diffuse light paths are, as you suspect, chosen such that the cosine term is balanced out, by picking rays proportionally more often in directions where the cosine would have been higher (i.e. closer to the direction of the surface normal); a good explanation can be found here. This makes a simple division by the number of samples enough to accurately model diffuse reflection.
In the rendering equation, which is the basis for path tracing, the reflected light is the term

L_r(x, omega_o) = integral over the hemisphere of f_r(x, omega_i, omega_o) * L_i(x, omega_i) * dot(omega_i, n) d omega_i

Here f_r represents the BRDF of the material. For a perfect reflector this BRDF would be zero in every direction except the reflection direction. It then makes little sense to sample any direction other than the reflected ray path. Even so, the dot product at the end would not be omitted.
"But in the smallpt code this cosine factor is not part of the calculation for mirror reflection on line 62."
By the definitions stated above, my conclusion is that the cosine factor should be part of the calculation, since that would remove the need for special cases for one material or another.
That's a very good question. I don't understand it fully, but let me attempt to give an answer.
In the diffuse calculation, the cosine factor is included via the sampling. Out of the possible halfsphere of incidence rays, it is more likely a priori that one came directly from above than directly from the horizon.
In the mirror calculation, the cosine factor is included via the sampling. Out of the possible single direction that an incidence ray could have come from, it is more likely a priori - you see where I'm going.
If you sampled coarse (glossy) reflection via a cone of incoming rays, you would again need to account for cosine weighting. However, for the trivial case of a single possible incidence direction, the sampling reduces to `if true`.
From a formal perspective, the cosine factor in the integral cancels with the cosine in the denominator of the specular BRDF (f_r = delta(omega_i, omega_o) / dot(omega_i, n)).
In the literature, the ideal mirror BRDF is defined by:
- a specular albedo,
- a Dirac delta (infinite in the direction of perfect reflection, zero everywhere else), and
- a factor of 1/cos(theta_i) that cancels the cosine term in the rendering equation.
See e.g.: http://resources.mpi-inf.mpg.de/departments/d4/teaching/ws200708/cg/slides/CG07-Brdf+Texture.pdf, Slide 12
For an intuition of the third point, consider that the differential footprint of the surface covered by a viewing ray from direction omega_r is the same as the footprint covered by the incident ray from direction omega_i. Thus, all incident radiance is reflected towards omega_r, independent of the angle of incidence.
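To make the cancellation concrete, here is a minimal sketch of a path tracer's mirror branch (not smallpt's actual code; trace and the other names are hypothetical):

```python
def reflect(d, n):
    """Mirror direction r = d - 2 (d . n) n, with d pointing into the surface."""
    k = 2.0 * sum(di * ni for di, ni in zip(d, n))
    return tuple(di - k * ni for di, ni in zip(d, n))

def mirror_radiance(hit_point, normal, albedo, ray_dir, trace, depth):
    """Radiance leaving a perfect mirror. trace(origin, dir, depth) is
    assumed to return incoming radiance as an RGB tuple. Because the
    delta BRDF carries a 1/cos(theta_i) that cancels the rendering
    equation's cosine factor, the result is simply albedo * incoming
    radiance -- no explicit dot product."""
    incoming = trace(hit_point, reflect(ray_dir, normal), depth + 1)
    return tuple(a * li for a, li in zip(albedo, incoming))
```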
If I have a mesh of triangles, how does one go about calculating the normals at each given vertex?
I understand how to find the normal of a single triangle. If I have triangles sharing vertices, I can partially find the answer by finding each triangle's respective normal, normalizing it, adding it to the total, and then normalizing the end result. However, this obviously does not take into account proper weighting of each normal (many tiny triangles can throw off the answer when linked with a large triangle, for example).
I think a good method is a weighted average that uses angles instead of areas as weights. In my opinion this is a better answer because the normal you are computing is a "local" feature, so you don't really care how big the contributing triangle is... you need a "local" measure of the contribution, and the angle between the two sides of the triangle at the specified vertex is such a local measure.
With this approach, a lot of small (thin) triangles will not skew the answer.
Using angles is the same as using an area-weighted average if you localize the computation by intersecting the triangles with a small sphere centered at the vertex.
The weighted average appears to be the best approach.
But be aware that, depending on your application, sharp corners could still give you problems. In that case, you can compute multiple vertex normals by averaging surface normals whose cross product is less than some threshold (i.e., closer to being parallel).
Search for "Offset triangular mesh using the multiple normal vectors of a vertex" by S.J. Kim et al. for more details about this method.
This blog post outlines three different methods and gives a visual example of why the standard and simple method (area weighted average of the normals of all the faces joining at the vertex) might sometimes give poor results.
You can give more weight to big triangles by multiplying the normal by the area of the triangle.
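A sketch of that, assuming NumPy arrays; conveniently, the cross product of two edges already has magnitude 2 * area, so accumulating unnormalized face normals gives the area weighting for free:

```python
import numpy as np

def area_weighted_normals(vertices, faces):
    """vertices: (n, 3) floats; faces: iterable of (i0, i1, i2) indices.
    The raw cross product of two edges has magnitude 2 * triangle area,
    so summing it *unnormalized* is exactly the area weighting."""
    v = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(v)
    for i0, i1, i2 in faces:
        face_n = np.cross(v[i1] - v[i0], v[i2] - v[i0])
        normals[i0] += face_n
        normals[i1] += face_n
        normals[i2] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)
```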
Check out this paper: Discrete Differential-Geometry Operators for Triangulated 2-Manifolds.
In particular, the "Discrete Mean Curvature Normal Operator" (Section 3.5, Equation 7) gives a robust normal that is independent of tessellation, unlike the methods in the blog post cited by another answer here.
Obviously you need to use a weighted average to get a correct normal, but using the triangle's area won't give you what you need, since the area of each triangle has no relationship to the weight that triangle's normal should carry for a given vertex.
If you base it on the angle between the two sides meeting at the vertex, you get the correct weight for every triangle incident to it. It might be convenient to convert to 2D somehow so you could work from a 360-degree base for your weights, but most likely just using the angle itself as the weight multiplier in 3D space, then adding up all the normals produced that way and normalizing the final result, will produce the correct answer; see the sketch below.
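Here is a sketch of that angle-weighted accumulation, assuming (n, 3) vertex positions and triangle index triples:

```python
import numpy as np

def angle_weighted_normals(vertices, faces):
    """Accumulate each face's unit normal at each of its corners,
    weighted by the corner angle between the two incident edges."""
    v = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(v)
    for face in faces:
        for k in range(3):
            i = face[k]
            e1 = v[face[(k + 1) % 3]] - v[i]
            e2 = v[face[(k + 2) % 3]] - v[i]
            face_n = np.cross(e1, e2)
            length = np.linalg.norm(face_n)
            if length == 0.0:          # degenerate triangle, skip
                continue
            cos_a = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
            normals[i] += (face_n / length) * angle
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)
```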
I'm looking for a way to determine the optimal X/Y/Z rotation of a set of vertices for rendering (using the X/Y coordinates, ignoring Z) on a 2D canvas.
I've had a couple of ideas. One is pure brute force: a three-dimensional loop over rotations from 0..359 on each axis (in steps of 1 or more, depending on results/speed requirements), applying each rotation to the set of vertices, measuring the difference between min/max on both the X and Y axes, storing the best result/rotation pairs, and using the most effective pair.
The second idea is to find the two points with the greatest Euclidean distance between them, calculate the rotation required to lay the 'path' between them along the X axis (again, we're ignoring the Z axis, so depth in the result doesn't matter), and then repeat several times. One problem I can see with this is that each repetition may override the previous rotation, and the original or subsequent rotations may not necessarily produce the greatest 2D area. The other issue is that with a single iteration the same problem occurs: the two points furthest apart may not have other points aligned along the same 'path', so we will probably not get an optimal rotation for a 2D projection.
Using the second idea, perhaps running the first, say, 3 iterations, storing the required rotation angle each time, and averaging across the 3 would give a more accurate result, as it takes into account not just a single rotation but the top 3 'pairs'.
Please rip these ideas apart and give insight of your own. I'm intrigued to see what solutions you may have, or algorithms unknown to me that you may quote.
I would compute the principal axes of inertia, and take the axis vector v with highest corresponding moment. I would then rotate the vertices to align v with the z-axis. Let me know if you want more details about how to go about this.
Intuitively, this finds the axis about which it's hardest to rotate the points, ie, around which the vertices are the most "spread out".
Without a concrete definition of what you consider optimal, it's impossible to say how well this method performs. However, it has a few desirable properties:
If the vertices are coplanar, this method is optimal in that it will always align that plane with the x-y plane.
If the vertices are arranged into a rectangular box, the box's shortest dimension gets aligned to the z-axis.
EDIT: Here's more detailed information about how to implement this approach.
First, assign a mass to each vertex. I'll discuss options for how to do this below.
Next, compute the center of mass of your set of vertices. Then translate all of your vertices by -1 times the center of mass, so that the new center of mass is now (0,0,0).
Compute the moment of inertia tensor. This is a 3x3 matrix whose entries are given by formulas you can find on Wikipedia. The formulas depend only on the vertex positions and the masses you assigned them.
Now you need to diagonalize the inertia tensor. Since it is symmetric positive-definite, it is possible to do this by finding its eigenvectors and eigenvalues. Unfortunately, numerical algorithms for finding these tend to be complicated; the most direct approach requires finding the roots of a cubic polynomial. However finding the eigenvalues and eigenvectors of a matrix is an extremely common problem and any linear algebra package worth its salt will come with code that can do this for you (for example, the open-source linear algebra package Eigen has SelfAdjointEigenSolver.) You might also be able to find lighter-weight code specialized to the 3x3 case on the Internet.
You now have three eigenvectors and their corresponding eigenvalues. These eigenvalues will be positive. Take the eigenvector corresponding to the largest eigenvalue; this vector points in the direction of your new z-axis.
Now, about the choice of mass. The simplest thing to do is to give all vertices a mass of 1. If all you have is a cloud of points, this is probably a good solution.
You could also set each star's mass to be its real-world mass, if you have access to that data. If you do this, the z-axis you compute will also be the axis about which the star system is (most likely) rotating.
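Putting the steps above together, a sketch using NumPy (unit masses by default):

```python
import numpy as np

def dominant_inertia_axis(points, masses=None):
    """Return the principal axis with the largest moment of inertia.
    points: (n, 3) array; masses: (n,) array, or None for unit masses."""
    p = np.asarray(points, dtype=float)
    m = np.ones(len(p)) if masses is None else np.asarray(masses, dtype=float)
    p = p - np.average(p, axis=0, weights=m)    # move center of mass to origin
    # Inertia tensor: I = sum_k m_k * (|r_k|^2 * Id - r_k r_k^T)
    r2 = np.sum(p ** 2, axis=1)
    inertia = np.dot(m, r2) * np.eye(3) - np.einsum('k,ki,kj->ij', m, p, p)
    eigvals, eigvecs = np.linalg.eigh(inertia)  # symmetric: ascending eigenvalues
    return eigvecs[:, -1]                       # axis of the largest moment
```

Since eigh returns an orthonormal basis, p @ eigvecs (after centering) re-expresses the vertices with the largest-moment axis along z, up to a possible reflection.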
This answer is intended to be valid only for convex polyhedra.
In http://203.208.166.84/masudhasan/cgta_silhouette.pdf you can find:
"In this paper, we study how to select view points of convex polyhedra such that the silhouette satisfies certain properties. Specifically, we give algorithms to find all projections of a convex polyhedron such that a given set of edges, faces and/or vertices appear on the silhouette."
The paper is an in-depth analysis of the properties and algorithms of polyhedra projections. But it is not easy to follow, I should admit.
With that algorithm at hand, your problem becomes combinatorial: enumerate all sets of candidate vertices, check whether a projection exists for each set, and if one does exist, calculate the area of the convex hull of the silhouette.
You did not provide the approximate number of vertices, but as always, a combinatorial solution is not recommended for unbounded (read: large) quantities.