Find the path and max-weighted edge in a Minimax path-finding solution? - path-finding

I currently have a programming assignment: given a large weighted undirected graph (1 < V < 2000,
0 < E < 100 000), find the maximum-weighted edge along the minimax path from vertex "source" to vertex "destination".
What I've got so far is the graph stored in an adjacency list (a Vector of Vectors of IntegerPair, where the first integer is the neighbor and the second is the weight of the edge).
I've also obtained the Minimum Spanning Tree by using Prim's algorithm:
private static void process(int vtx) {
    taken.set(vtx, true);
    for (int j = 0; j < AdjList.get(vtx).size(); j++) {
        IntegerPair v = AdjList.get(vtx).get(j);
        if (!taken.get(v.first())) {
            pq.offer(new IntegerPair(v.second(), v.first())); // sort by weight, then by adjacent vertex
        }
    }
}
void PreProcess() {
    Visited = new Vector<Boolean>();
    taken = new Vector<Boolean>();
    pq = new PriorityQueue<IntegerPair>();
    taken.addAll(Collections.nCopies(V, false));
    process(0);
    int numTaken = 1;
    int mst_cost = 0;
    while (!pq.isEmpty() && numTaken != V) { // do this until all V vertices are taken (or E = V - 1 edges are taken)
        IntegerPair front = pq.poll();
        if (!taken.get(front.second())) { // we have not connected this vertex yet
            mst_cost += front.first(); // add the weight of this edge
            process(front.second());
            numTaken++;
        }
    }
}
What I am stuck on now is how to find the path from source to destination and return the maximum-weight edge in the query below:
int Query(int source, int destination) {
    int ans = 0;
    return ans;
}
I was told to use Depth-First Search to traverse the resulting MST, but I think the DFS will also visit vertices that are not on the correct path (am I right?). And how do I find the maximum edge?
(This problem is not related to any SSSP algorithm, because I haven't been taught Dijkstra's, etc.)

One possible way to do this would be to use Kruskal's MST algorithm. It is a greedy algorithm that starts with an empty graph, then repeatedly adds the lightest edge that does not produce a cycle. This satisfies the properties of a tree, while ensuring that every path in the tree is a minimax path.
To find the maximum-weighted edge, you can also use a property of the algorithm. Since you know that EdgeWeight(n) <= EdgeWeight(n+1), the last edge you add before source and destination become connected is the maximum-weighted edge on the path between them.
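A minimal sketch of that idea (here in C++, with hypothetical Edge/DSU types, since the assignment itself is in Java): sort the edges, add them one by one with a union-find, and report the weight of the edge that first connects source and destination.

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Minimal disjoint-set (union-find) with path compression.
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

struct Edge { int u, v, w; };

// Kruskal-style scan: add edges in non-decreasing weight order; the edge
// that first puts source and destination in the same component is the
// maximum-weighted edge on the minimax path between them.
int minimaxEdge(int V, std::vector<Edge> edges, int source, int destination) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; });
    DSU dsu(V);
    for (const Edge& e : edges) {
        dsu.unite(e.u, e.v);
        if (dsu.find(source) == dsu.find(destination)) return e.w;
    }
    return -1; // source and destination are not connected
}
```

Note that this answers one query directly without ever materializing the path; for many queries you would still build the MST once and walk it.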


Sway of Binary Tree

Original Problem
The problem is as stated and my solution is below: return the amount a BST sways in one direction.
Sway is denoted by the number of nodes that are "unbalanced" (nullptr on only one side); a left-swaying tree returns the negative amount it sways, with any right sway offsetting the left, and vice versa.
int tree_sway(Node * node) {
    if (!node) {
        return 0;
    }
    int m = tree_sway(node->right) + 1;
    int n = tree_sway(node->left) - 1;
    return m - n;
}
For the tree sway problem, is the solution I have posted correct? If not, would the only way to do this problem be to create a helper function that keeps track of how many left and right turns the recursive step makes?
The code that you have posted is not quite correct. For example, on a tree consisting of a root and a single leaf, it returns 0 when the leaf is on the left and 4 when it is on the right, instead of -1 and +1. One way of doing it is this:
int tree_sway(Node *node) {
    // base case of the recursion
    if (!node) { return 0; }
    // go down the left and right branches and add their scores
    int score = tree_sway(node->left) + tree_sway(node->right);
    // adjust the score for the missing children of the current node
    if (!node->left) { score++; }
    if (!node->right) { score--; }
    return score;
}
The general idea is that as you recurse, you first go all the way down the tree and as you come back up you count the missing left and right branches and pass the running tally up the tree.
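As a quick sanity check of that idea, here is the corrected function again (with a minimal hypothetical Node type) so it can be exercised on the smallest trees:

```cpp
#include <cassert>

// Hypothetical minimal node type; a real tree node would carry data too.
struct Node {
    Node* left = nullptr;
    Node* right = nullptr;
};

// +1 for every missing left child, -1 for every missing right child,
// summed over the whole tree (the corrected recursion from above).
int tree_sway(Node* node) {
    if (!node) return 0;
    int score = tree_sway(node->left) + tree_sway(node->right);
    if (!node->left) score++;
    if (!node->right) score--;
    return score;
}
```

A root with only a left child yields -1 (it sways left), only a right child yields +1, and a balanced three-node tree yields 0.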

Extend 2d polygon by vector

I have convex polygons, and I want to extend them by projecting along a vector like so:
(Original polygon and vector on left, desired result on right.)
My polygons are stored as a series of points with counter-clockwise winding. What I want to find are the "starting" and "stopping" points that I need to project from, as in the circled vertices below.
(The green arrows are to indicate the polygon's winding, giving the "direction" of each edge.)
My original plan was to determine which points to use by projecting a ray with the vector's direction from each point, and finding the first and last points whose ray doesn't intersect an edge. However, that seems expensive.
Is there a way I can use the edge directions vs the vector direction, or a similar trick, to determine which points to extend from?
Look at points where the direction of the vector falls between the directions of the edges.
In other words, take three vectors:
of the edge leading out of the vertex
of the translation vector
opposite to the edge leading to the vertex
If they are in this order when going CCW, i.e. if the second vector is between the first and the third, this is an "inside" point.
In order to determine whether a vector lies between two other vectors, use cross product as described e.g. here.
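A minimal C++ sketch of that cross-product test (the names are my own): a vector b lies within the counterclockwise sweep from a to c when the cross-product signs agree, with a separate case for sweeps larger than 180 degrees.

```cpp
#include <cassert>

struct Vec2 { double x, y; };

// z-component of the 2D cross product: positive when b is counterclockwise
// of a, negative when clockwise, zero when parallel.
double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// True if b lies within the counterclockwise sweep from a to c.
bool isBetweenCCW(Vec2 a, Vec2 b, Vec2 c) {
    if (cross(a, c) >= 0) // sweep from a to c is at most 180 degrees
        return cross(a, b) >= 0 && cross(b, c) >= 0;
    // reflex sweep (more than 180 degrees): b only needs to be on one side
    return cross(a, b) >= 0 || cross(b, c) >= 0;
}
```

Applied per vertex with a = the outgoing edge, b = the translation vector, and c = the reversed incoming edge, this classifies the "inside" points.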
Yes you can. You want to project along (x, y), so the normal is (y, -x). Now rotate by that angle (use atan2, or use the vector directly if you understand rotation matrices). The points to project from are now the ones with the minimum and maximum x. You can also speed up the projection by always doing it along an axis and then rotating back.
n.m. answered the question as I asked and pictured it, but upon programming I soon noticed that there was a common case where all vertices would be "outside" vertices (this can be easily seen on triangles, and can occur for other polygons too).
The text explanation:
The solution I used was to look at the normal vectors of the edges leading into and exiting each vertex. The vertices we want to extend are vertices that have at least one edge normal with a minimum angle of less than 90 degrees to the delta vector we are extending by.
The outward-facing edge normals on a counterclockwise-wound polygon can be found by:
normal = (currentVertex.y - nextVertex.y, nextVertex.x - currentVertex.x)
Note that since we don't care about the exact angle, we don't need to normalize (make a unit vector of) the normal, which saves a square root.
To compare it to the delta vector, we use the dot product:
dot = edgeNormal.dot(deltaVector)
If the result is greater than zero, the minimum angle is acute (less than 90). If the result is exactly zero, the vectors are perpendicular. If the result is less than zero, the minimum angle is obtuse. It is worth noting when the vectors are perpendicular, since it lets us avoid adding extra vertices to the extended polygon.
If you want to visualize how the angle works with the dot product, like I did, just look at a graph of arc cosine (normally you get the angle via acos(dot)).
Now we can find the vertices that have one acute and one not-acute minimum angle between their edge normals and the delta vector. Everything on the "acute side" of these vertices has the delta vector added to it, and everything on the "obtuse side" stays the same. The two border vertices themselves are duplicated, with one copy extended and one staying the same, unless the "obtuse side" is exactly perpendicular to the delta vector (in that case we only need to extend the vertex, since otherwise we would have two vertices on the same line).
Here is the C++ code for this solution.
It may look a little long, but it is actually quite straightforward and has many comments so it hopefully shouldn't be hard to follow.
It is part of my Polygon class, which has a std::vector of counterclockwise-wound vertices. units::Coordinate are floats, and units::Coordinate2D is a vector class that I feel should be self-explanatory.
// Compute the normal of an edge of a polygon with counterclockwise winding, without normalizing it to a unit vector.
inline units::Coordinate2D _get_non_normalized_normal(units::Coordinate2D first, units::Coordinate2D second) {
    return units::Coordinate2D(first.y - second.y, second.x - first.x);
}
enum AngleResult {
    ACUTE,
    PERPENDICULAR,
    OBTUSE
};
// Avoid accumulative floating point errors.
// Choosing a good epsilon is extra important, since we don't normalize our vectors (so it is scale dependent).
const units::Coordinate eps = 0.001;
// Check what kind of angle the minimum angle between two vectors is.
inline AngleResult _check_min_angle(units::Coordinate2D vec1, units::Coordinate2D vec2) {
    const units::Coordinate dot = vec1.dot(vec2);
    if (std::abs(dot) <= eps)
        return PERPENDICULAR;
    if ((dot + eps) > 0)
        return ACUTE;
    return OBTUSE;
}
Polygon Polygon::extend(units::Coordinate2D delta) const {
    if (delta.isZero()) { // Isn't being moved. Just return the current polygon.
        return Polygon(*this);
    }
    const std::size_t numVerts = vertices_.size();
    if (numVerts < 3) {
        std::cerr << "Error: Cannot extend polygon (polygon invalid; must have at least three vertices).\n";
        return Polygon();
    }
    // We are interested in extending from vertices that have at least one edge normal with a minimum angle acute to the delta.
    // With a convex polygon, there will form a single contiguous range of such vertices.
    // The first and last vertex in that range may need to be duplicated, and then the vertices within the range
    // are projected along the delta to form the new polygon.
    // The first and last vertices are defined by the vertices that have only one acute edge normal.
    // Whether the minimum angle of the normal of the edge made from the last and first vertices is acute with delta.
    const AngleResult firstEdge = _check_min_angle(_get_non_normalized_normal(vertices_[numVerts-1], vertices_[0]), delta);
    const bool isFirstEdgeAcute = firstEdge == ACUTE;
    AngleResult prevEdge = firstEdge;
    AngleResult currEdge;
    bool found = false;
    std::size_t vertexInRegion;
    for (std::size_t i = 0; i < numVerts - 1; ++i) {
        currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i], vertices_[i+1]), delta);
        if (isFirstEdgeAcute != (currEdge == ACUTE)) {
            // Either crossed from inside to outside the region, or vice versa.
            // (One side of the vertex has an edge normal that is acute, the other side obtuse.)
            found = true;
            vertexInRegion = i;
            break;
        }
        prevEdge = currEdge;
    }
    if (!found) {
        // A valid polygon has two points that define where the region starts and ends.
        // If we didn't find one in the loop, the polygon is invalid.
        std::cerr << "Error: Polygon can not be extended (invalid polygon).\n";
        return Polygon();
    }
    found = false;
    std::size_t first, last;
    // If an edge being extended is perpendicular to the delta, there is no need to duplicate that vertex.
    bool shouldDuplicateFirst, shouldDuplicateLast;
    // We found either the first or last vertex for the region.
    if (isFirstEdgeAcute) {
        // It is the last vertex in the region.
        last = vertexInRegion;
        shouldDuplicateLast = currEdge != PERPENDICULAR; // currEdge is either perpendicular or obtuse.
        // Loop backwards from the end to find the first vertex.
        for (std::size_t i = numVerts - 1; i > 0; --i) {
            currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i-1], vertices_[i]), delta);
            if (currEdge != ACUTE) {
                first = i;
                shouldDuplicateFirst = currEdge != PERPENDICULAR;
                found = true;
                break;
            }
        }
        if (!found) {
            std::cerr << "Error: Polygon can not be extended (invalid polygon).\n";
            return Polygon();
        }
    } else {
        // It is the first vertex in the region.
        first = vertexInRegion;
        shouldDuplicateFirst = prevEdge != PERPENDICULAR; // prevEdge is either perpendicular or obtuse.
        // Loop forwards from the first vertex to find where it ends.
        for (std::size_t i = vertexInRegion + 1; i < numVerts - 1; ++i) {
            currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i], vertices_[i+1]), delta);
            if (currEdge != ACUTE) {
                last = i;
                shouldDuplicateLast = currEdge != PERPENDICULAR;
                found = true;
                break;
            }
        }
        if (!found) {
            // The edge normal between the last and first vertex is the only non-acute edge normal.
            last = numVerts - 1;
            shouldDuplicateLast = firstEdge != PERPENDICULAR;
        }
    }
    // Create the new polygon.
    std::vector<units::Coordinate2D> newVertices;
    newVertices.reserve(numVerts + (shouldDuplicateFirst ? 1 : 0) + (shouldDuplicateLast ? 1 : 0));
    for (std::size_t i = 0; i < numVerts; ++i) {
        // Extend vertices in the region first-to-last inclusive. Duplicate first/last vertices if required.
        if (i == first && shouldDuplicateFirst) {
            newVertices.push_back(vertices_[i]);
            newVertices.push_back(vertices_[i] + delta);
        } else if (i == last && shouldDuplicateLast) {
            newVertices.push_back(vertices_[i] + delta);
            newVertices.push_back(vertices_[i]);
        } else {
            newVertices.push_back(isFirstEdgeAcute ? // Determine which range to use.
                ((i <= last || i >= first) ? vertices_[i] + delta : vertices_[i]) :  // Range overlaps start/end of the array.
                ((i <= last && i >= first) ? vertices_[i] + delta : vertices_[i])); // Range is somewhere in the middle of the array.
        }
    }
    return Polygon(newVertices);
}
So far I tested this code with triangles, rectangles, approximated circles, and arbitrary convex polygons made by extending the approximated circles sequentially by many different delta vectors.
Please note that this solution is still only valid for convex polygons.

How many components in a directed graph?

I have the following graph:
The optimal solution is to start the DFS from vertex (3): then I get one component. But if we start the DFS from vertex (1) and then from (3), I get two components.
The question is:
I want to know: how many components are in this graph? Or, put another way, what is the minimum number of DFS runs needed to cover the whole graph?
What algorithm is needed for doing this?
You are confusing two definitions.
For undirected graphs there is the notion of connected components, which you find by performing a DFS on the undirected graph.
For directed graphs there is the notion of strongly connected components, for which multiple algorithms are available, all slightly more complicated than a simple DFS.
What you should do depends on which of the two notions you need. Your graph has one connected component when viewed as an undirected graph, and two strongly connected components when viewed as a directed graph.
I know this is an old thread but it would be helpful to add how I solved the problem:
1 - Find the strongly connected components (SCCs) of the graph, and replace each SCC with a single node that represents it.
2 - Now we can find the minimum number of nodes we can run DFS from and still cover all the other nodes by looking at the degrees of the nodes in the new graph: start a DFS from each node with zero in-degree.
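Those two steps can be sketched as follows (a C++ illustration using Kosaraju's algorithm for the SCC labeling; the class and method names are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Kosaraju's algorithm labels each vertex with its strongly connected
// component; the minimum number of DFS starts that cover the whole graph
// equals the number of condensation nodes with zero in-degree.
struct SCCCover {
    int n;
    std::vector<std::vector<int>> g, rg; // graph and its reverse
    explicit SCCCover(int n) : n(n), g(n), rg(n) {}
    void addEdge(int u, int v) { g[u].push_back(v); rg[v].push_back(u); }

    int minDfsStarts() {
        std::vector<int> order, comp(n, -1);
        std::vector<bool> seen(n, false);
        // First pass: record vertices in order of DFS finish time.
        for (int s = 0; s < n; ++s) dfs1(s, seen, order);
        // Second pass: sweep the reverse graph in reverse finish order.
        int c = 0;
        for (int i = n - 1; i >= 0; --i)
            if (comp[order[i]] == -1) dfs2(order[i], c++, comp);
        // Count condensation nodes with no incoming cross-component edge.
        std::vector<bool> hasIncoming(c, false);
        for (int u = 0; u < n; ++u)
            for (int v : g[u])
                if (comp[u] != comp[v]) hasIncoming[comp[v]] = true;
        return (int)std::count(hasIncoming.begin(), hasIncoming.end(), false);
    }

private:
    void dfs1(int u, std::vector<bool>& seen, std::vector<int>& order) {
        if (seen[u]) return;
        seen[u] = true;
        for (int v : g[u]) dfs1(v, seen, order);
        order.push_back(u);
    }
    void dfs2(int u, int c, std::vector<int>& comp) {
        if (comp[u] != -1) return;
        comp[u] = c;
        for (int v : rg[u]) dfs2(v, c, comp);
    }
};
```

For the graph in the question (3 reaches the cycle 1⇄2), this reports a single required start, namely the component containing vertex 3.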
I believe you are trying to find weakly connected components.
Testing whether a directed graph is weakly connected can be done easily in linear time. Simply turn all edges into undirected edges and use the DFS-based connected components algorithm.
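A minimal sketch of that linear-time approach in C++ (the function name is my own): drop the edge directions, then count how many times a plain DFS has to restart.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Count weakly connected components: build an undirected adjacency list
// from the directed edges, then run a DFS from every unvisited vertex.
int weaklyConnectedComponents(int n, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(n);
    for (auto [u, v] : edges) { // drop the direction of every edge
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    std::vector<bool> seen(n, false);
    std::vector<int> stack;
    int components = 0;
    for (int s = 0; s < n; ++s) {
        if (seen[s]) continue;
        ++components; // each DFS restart is one more component
        seen[s] = true;
        stack.push_back(s);
        while (!stack.empty()) { // iterative DFS avoids deep recursion
            int u = stack.back();
            stack.pop_back();
            for (int v : adj[u])
                if (!seen[v]) { seen[v] = true; stack.push_back(v); }
        }
    }
    return components;
}
```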
To solve this exercise, the idea is to work with an adjacency list of IN-edges.
Take as an example a graph with 3 nodes and 2 edges, (0,1) and (0,2), where (a,b) indicates that if you switch light a, then light b will also be switched.
OUT-edges adjacency list:
0 -> 1, 2
1 -> _
2 -> _
IN-edges adjacency list:
0 -> _
1 -> 0
2 -> 0
Assuming we don't have cycles, if we follow the IN-edges down until we reach a node which does not have a child, we get what I call an "influencer", that is, a node you cannot switch via any other node.
Now, to account for cycles, I check whether any neighbor has or is an influencer. If this is not the case and all neighbors have already been visited, I have encountered a cycle and make the current node an influencer.
This is my code (tested with simple examples on my desk):
private int numberOfLights() {
    Scanner scanner = new Scanner(System.in);
    int n = scanner.nextInt();
    List<List<Integer>> inAdjList = new ArrayList<List<Integer>>();
    for (int i = 0; i < n; i++) {
        inAdjList.add(new ArrayList<>());
    }
    for (int i = 0; i < n; i++) {
        int from = scanner.nextInt();
        int to = scanner.nextInt();
        inAdjList.get(to).add(from);
    }
    boolean[] visited = new boolean[n];
    boolean[] isOrHasInfluencer = new boolean[n];
    List<Integer> influencers = new ArrayList<>();
    for (int i = 0; i < n; i++) {
        if (!visited[i]) {
            DFS(i, visited, isOrHasInfluencer, influencers, inAdjList);
        }
    }
    return influencers.size();
}
private void DFS(Integer cur, boolean[] visited, boolean[] isOrHasInfluencer, List<Integer> influencers, List<List<Integer>> inAdjList) {
    visited[cur] = true;
    boolean hasUnvisitedChildren = false;
    for (Integer neighbor : inAdjList.get(cur)) {
        if (!visited[neighbor]) {
            hasUnvisitedChildren = true;
            DFS(neighbor, visited, isOrHasInfluencer, influencers, inAdjList);
        }
        if (isOrHasInfluencer[neighbor]) {
            isOrHasInfluencer[cur] = true;
        }
    }
    if (!hasUnvisitedChildren && !isOrHasInfluencer[cur]) {
        isOrHasInfluencer[cur] = true;
        influencers.add(cur);
    }
}
Hope that helps! :)
You can run a findConnectedComponent algorithm starting from each of the nodes and return the minimum number to solve your specific problem.
I think this solution is not optimal, but it works very well with small to medium-sized input graphs.
Old question, but there is another way.
Convert the directed graph to an undirected one by doing a DFS on all the nodes. O(V+E)
Do what you do for undirected graph. O(V+E)

How to convert a directed graph to an undirected graph?

How do you convert a directed graph to an undirected graph using an adjacency matrix?
public void directedToUndirected(boolean[][] adjMatrix) {
}
Do a boolean OR operation with the original matrix and the transpose of the original.
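For example (a C++ sketch with a boolean matrix; the hypothetical function name mirrors the Java stub, and the Java version is analogous):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<bool>>;

// Symmetrize an adjacency matrix: result[i][j] = adj[i][j] OR adj[j][i].
// This is the boolean OR of the matrix with its transpose, which turns
// every directed edge into an undirected one.
Matrix directedToUndirected(const Matrix& adj) {
    const std::size_t n = adj.size();
    Matrix result(n, std::vector<bool>(n, false));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            result[i][j] = adj[i][j] || adj[j][i];
    return result;
}
```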
You could loop through the whole array and flip the indices, assuming you're using 0 and 1's
for (int i = 0; i < adjMatrix.length; i++) {
    for (int j = 0; j < adjMatrix[i].length; j++) {
        if (adjMatrix[i][j] == 1) {
            adjMatrix[j][i] = 1;
        }
    }
}
You should find a method that does not change the information. In all of the methods suggested above, an edge from node i to j also becomes an edge from j to i, which is wrong, because no such edge exists.
You can change the graph into a larger one and make some virtual connections. For example, if i has a connection to j and j has a connection to i, you can change i to i' and define a new edge between j and i'. You should also record the fact that i' is the same as i.

CSG operations on implicit surfaces with marching cubes

I render isosurfaces with marching cubes (or perhaps marching squares, as this is 2D), and I want to do set operations like set difference, intersection and union. I thought this was easy to implement, by simply choosing between the two vertex scalars from the two different implicit surfaces, but it is not.
For my initial testing, I tried with two circles and the set operation difference, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations gives the expected result.
void march(const vec2& cmin, //min x and y for the grid cell
           const vec2& cmax, //max x and y for the grid cell
           std::vector<vec2>& tri,
           float iso,
           float (*cmp1)(const vec2&), //distance from stationary circle
           float (*cmp2)(const vec2&)  //distance from moving circle
           )
{
    unsigned int squareindex = 0;
    float scalar[4];
    vec2 verts[8];
    /* initial setup of the grid cell */
    verts[0] = vec2(cmax.x, cmax.y);
    verts[2] = vec2(cmin.x, cmax.y);
    verts[4] = vec2(cmin.x, cmin.y);
    verts[6] = vec2(cmax.x, cmin.y);
    float s1, s2;
    /**********************************
    ********For-loop of interest******
    *******Set difference between ****
    *******two implicit surfaces******
    **********************************/
    for(int i=0, j=0; i<4; ++i, j+=2){
        s1 = cmp1(verts[j]);
        s2 = cmp2(verts[j]);
        if((s1 < iso)){ //if inside circle1
            if((s2 < iso)){ //if inside circle2
                scalar[i] = s2; //then set the scalar to the moving circle
            } else {
                scalar[i] = s1; //only inside circle1
                squareindex |= (1<<i); //mark as inside
            }
        }
        else {
            scalar[i] = s1; //inside neither circle
        }
    }
    if(squareindex == 0)
        return;
    /* Usual interpolation between edge points to compute
       the new intersection points */
    verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
    verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
    verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
    verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);
    for(int i=0; i<10; ++i){ //10 = maximum 3 triangles + one end token
        int index = triTable[squareindex][i]; //look up our indices for triangulation
        if(index == -1)
            break;
        tri.push_back(verts[index]);
    }
}
This gives me weird jaggies:
(source: mechcore.net)
It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this.
A full testcase can be downloaded HERE
EDIT: Basically, my implementation of marching squares works fine. It is my scalar field which is broken, and I wonder what the correct way would look like. Preferably I'm looking for a general approach to implement the three set operations I discussed above, for the usual primitives (circle, rectangle/square, plane)
EDIT 2: Here are some new images after implementing the answerer's whitepaper:
1.Difference
2.Intersection
3.Union
EDIT 3: I implemented this in 3D too, with proper shading/lighting:
1.Difference between a greater sphere and a smaller sphere
2.Difference between a greater sphere and a smaller sphere in the center, clipped by two planes on both sides, and then union with a sphere in the center.
3.Union between two cylinders.
This is not how you mix the scalar fields. Your scalars say one thing, but your inside/outside flags say another. First merge the fields, then render as if you were rendering a single compound object:
for(int i=0, j=0; i<4; ++i, j+=2){
    s1 = cmp1(verts[j]);
    s2 = cmp2(verts[j]);
    s = max(s1, iso - s2); // This is the secret sauce
    if(s < iso) { // inside circle1, but not inside circle2
        squareindex |= (1<<i);
    }
    scalar[i] = s;
}
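The same min/max trick covers the other set operations too. A minimal sketch, assuming signed-distance-style fields where negative values are inside (i.e. the iso level is 0); for a nonzero iso level the complement term changes accordingly:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Combinators for implicit fields where a point is inside when the field
// value is negative (iso level 0, as with signed distance functions).
float csgUnion(float a, float b)        { return std::min(a, b); }
float csgIntersection(float a, float b) { return std::max(a, b); }
float csgDifference(float a, float b)   { return std::max(a, -b); } // A minus B

// Example field: signed distance to a circle centered at (cx, cy) with radius r.
float circle(float x, float y, float cx, float cy, float r) {
    float dx = x - cx, dy = y - cy;
    return std::sqrt(dx * dx + dy * dy) - r;
}
```

Sampling the combined field at each grid corner (instead of branching between the two separate fields) keeps the edge interpolation consistent, which is what removes the jaggies.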
This article might be helpful: Combining CSG modeling with soft blending using Lipschitz-based implicit surfaces.
