I am new to combinatorics problems and am trying to understand how to solve this problem. I understand that nC2 gives the number of possible pairs, but after that I have no idea how to proceed with the math. Please explain further; no code needed.
https://www.hackerearth.com/practice/math/combinatorics/inclusion-exclusion/practice-problems/algorithm/aryan-and-consulting-sessions-0e0656ab/
Let the students be graph vertices and the possible pairs be edges. This graph is the complete graph K_n, and its number of edges is p = n*(n-1)/2 (nC2, as you wrote).
We need to find the number of edge covers of this graph.
I wanted to count the edge covers containing between (p+1)/2 and p edges, but it seems the values get too big and this is a rather complex problem.
However, we can find a formula for the total number of edge covers of the complete graph K_n in the OEIS:
a(n) = Sum_{k=0..n} (-1)^(n-k)*binomial(n, k)*2^binomial(k, 2)
You also need to calculate the binomial coefficients modulo m.
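For example, for n = 3 the sum gives (-1)*1 + 3*1 - 3*2 + 1*8 = 4, which matches the four edge covers of K_3 (the triangle itself and its three 2-edge paths). Below is a minimal sketch of how this could be computed, assuming a prime modulus such as 10^9 + 7 (the problem's actual modulus is not stated here): precompute factorials and inverse factorials for the binomial coefficients and evaluate the sum term by term.

```python
# Sketch only: computes a(n) = sum_{k=0..n} (-1)^(n-k) * C(n,k) * 2^C(k,2) mod m.
# Assumes m is prime (10**9 + 7 is only a guess at the problem's modulus).

def count_edge_covers(n, m=10**9 + 7):
    # Factorials and inverse factorials for C(n, k) modulo m.
    fact = [1] * (n + 1)
    for i in range(1, n + 1):
        fact[i] = fact[i - 1] * i % m
    inv_fact = [1] * (n + 1)
    inv_fact[n] = pow(fact[n], m - 2, m)        # Fermat inverse, needs m prime
    for i in range(n, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % m

    def binom(a, b):
        return fact[a] * inv_fact[b] % m * inv_fact[a - b] % m

    total = 0
    for k in range(n + 1):
        term = binom(n, k) * pow(2, k * (k - 1) // 2, m) % m
        if (n - k) % 2:                          # apply the (-1)^(n-k) sign
            term = (m - term) % m
        total = (total + term) % m
    return total

print(count_edge_covers(3))   # 4
```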
Given N+1 points: N of them lie on the x-axis, and the remaining point (the HEAD point) lies anywhere in the coordinate plane.
We are also given a START point on the x-axis.
Find the shortest distance needed to cover all the points, starting from the START point. We can traverse a point multiple times.
Example, N+1 = 4:
Points on the x-axis: (0,1), (0,2), (0,3)
HEAD point: (1,1)   // only the HEAD point can lie anywhere; the rest are all on the x-axis
START point: (0,1)
I am looking for a method for approaching this problem: should we visit the HEAD point first, or somewhere in between?
I tried to find a way to use graph theory to simplify this problem and reduce the number of paths that need to be considered. If there is an elegant way to represent this problem using graphs to identify a solution, I was not able to find it. This approach becomes very inefficient as n increases: the time and memory are O(2^n).
Looking at this as a tree graph, the root node would be the START point, and each of its child nodes would be the points it is connected to.
Since the START point and the rest of the points aside from the HEAD all lie on the x-axis, all non-HEAD points only need to be connected to adjacent points on the x-axis. This is because the distance of the path between any two points is the sum of the distances between the adjacent points along the path between them (the subset of nodes representing points on the x-axis does not need to form a complete graph). This reduces the brute-force approach somewhat.
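For illustration, here is a small sketch of that reduced graph under my reading of it (the assumption that the HEAD point connects directly to every x-axis point is mine, based on the figure described below; names and example coordinates are illustrative):

```python
# Sketch of the reduced graph: consecutive x-axis points are linked, and HEAD is
# linked to every x-axis point, since only the x-axis-to-x-axis shortcuts are redundant.
import math

def build_reduced_graph(axis_points, head):
    """axis_points: list of (x, 0) tuples; head: (x, y). Returns an adjacency dict."""
    pts = sorted(axis_points)
    adj = {p: [] for p in pts}
    adj[head] = []
    for a, b in zip(pts, pts[1:]):        # only adjacent x-axis points are linked
        adj[a].append(b)
        adj[b].append(a)
    for p in pts:                         # HEAD is reachable directly from any point
        adj[p].append(head)
        adj[head].append(p)
    return adj

def dist(a, b):
    # Edge weights are just the Euclidean distances between neighbouring nodes.
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Illustrative data (not the question's coordinates).
print(build_reduced_graph([(0, 0), (2, 0), (5, 0)], (3, 2)))
```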
Here's a simple example:
The upper left shows the original problem: points on the x-axis along with the START and HEAD points.
In the upper right, this has been transformed into a graph, with each node representing a point from the original problem. The edges represent the paths that can be taken between points. This assumes that the START point only represents the first point in the path; unlike the other nodes, it is only included in the path once. If that is not the case and the path can return to the START point, this would approximately double the number of possible paths, but the same approach can be followed.
In the bottom left, the START point, a, is the root of a tree graph, and each node connected to the START point is a child node. This process is repeated for each child node until either:
A path that is obviously not optimal is identified, in which case that node can just be excluded from the graph. See the nodes in red boxes; going back and forth between the same nodes is unnecessary.
All points are included when traversing the tree from the root to that node, producing a potential solution.
Note that when creating the tree graph, each time a node is repeated, its "potential" child nodes are the same as the first time the node was included. By "potential", I mean that the cases above still need to be checked, because the result might include a nonsensical path, in which case that node would not be included. It is also possible that a potential solution results from the path after its child nodes are included.
The last step is to add up the distances for each of the potential solutions to determine which path is shortest.
This requires a careful examination of the different cases.
Assume for now that START (S) is on the far left and HEAD (H) is somewhere in the middle; the path may be something like
H
/ \
S ---- * ----*----* * --- * ----*
Or it might be shorter to go to and from H from one of the other nodes
H
//
S ---- * --- * -- *----------*---*
If S is not at one end, you might have something like
H
/ \
* ---- * ----*----* * --- * ----*
--------S
Or even going directly from S to H on the first step
H
/ |
* ---- * ----*----* |
S
A full analysis of cases would be quite extensive.
Actually solving the problem might depend on the number of nodes you have. If the number is small (< 10), then complete enumeration might be possible: just work out every possible path, eliminate the ones which are illegal, and choose the smallest. The number of paths is, I think, on the order of n!, so it's computable for small n.
For large n you can break the problem into small segments. I think it's enough just to consider a small patch with nodes on either side of H and a small patch with nodes on either side of S.
This is not really a solution, but a possible way to think about tackling the problem.
(To be pedantic, stackoverflow.com is not the right site for this question in the Stack Exchange network; Computational Science (algorithms) might be a better place.)
This is a fun problem. First, let's try to find a brute-force solution, as Poosh did.
Observations about the Shortest Path
No repeated points
You are in a Euclidean geometry, so the triangle inequality holds: for all points a, b, c, the distance satisfies d(a,c) <= d(a,b) + d(b,c). Thus, whenever a path contains a point more than once, removing one of the repeated visits never makes the path longer, which proves that there is always an optimal path containing each point exactly once.
Permutations
Our problem is thus to find the permutation, let's call it M, of the numbers 1...n for the points P1...Pn (where P0 is the fixed start point, Pn is the head point, and P1...P(n-1) are ordered by increasing x value) that minimizes the sum of |P_M(i) - P_M(i-1)| for i from 1 to n (with M(0) = 0, the start point), |.| being the vector length sqrt(v_x² + v_y²).
The number of permutations of a set of size n is n!. In this case we have n+1 points, so a brute force approach testing all permutations would have complexity (n+1)!, which is higher than even 2^n and definitely not practical, so we need further observations to improve this.
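As a sanity check for very small inputs, here is a hedged sketch of exactly this brute force (the coordinates in the example are made up): the start point is pinned as the first stop, and every ordering of the remaining points, head included, is tried.

```python
# Brute-force sketch of the permutation search described above; only usable for tiny n.
from itertools import permutations
from math import hypot

def shortest_route(start, others):
    """start: (x, y) on the x-axis; others: list of (x, y) including the head point."""
    def length(route):
        return sum(hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(route, route[1:]))
    best = min(permutations(others), key=lambda perm: length((start,) + perm))
    return (start,) + best, length((start,) + best)

# Illustrative data: three x-axis points and one off-axis head point.
route, total = shortest_route((0, 0), [(1, 0), (2, 0), (3, 0), (1, 1)])
print(route, total)
```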
Next Steps
My next step would now be to see if there are any other sequences that can be proven to be not optimal, leading to a reduction in the number of candidates to be tested.
Paths of non-head points
Let's look at all paths (sequences of indices of points) that don't contain the head point and that are parts of the optimal path. If we don't change the start and end points of such a path, then any transpositions inside it have no effect on the rest of the route, and we can perform purely local optimizations. We can prove that those sequences must have monotonic (increasing or decreasing) x coordinate values and thus monotonic indices (as the points with indices 1 to n-1 are ordered by ascending x coordinate):
We are in a purely one dimensional subspace and the total distance of the path is thus equal to the sum of the absolute values of the differences in x coordinates between one such point and the next. It is clear that this sum is minimized by ordering by x coordinate in either ascending or descending order and thus ordering the indices in the same way. Note that this is true for maximal such paths as well as for all continuous "subpaths" of them.
Wrapping it up
The only choices we have left are:
where do we place the head node in the optimal path?
which way do we order the two paths to the left and right?
This means we have n values for the index of the head node (1...n, 0 is fixed as the start node) and 2x2 values for sort order. So we have 4n choices which we can all calculate and pick the shortest one. One of the sort orders probably determines the other but I leave that to you.
Anyway, the complexity of this algorithm is O(4n) = O(n). Because reading in the input of the problem is O(n) and writing the output is as well, I believe that is an algorithm of optimal complexity. However, if we could reformulate the problem somewhat, so that we could read and write the input and output in some compressed form (only the parameters that we actually need to solve the problem), then it is possible that we could do better.
P.S.: I'm not a mathematician so I probably used wrong words for some concepts and missed the usual notation for the variables and functions. I would be glad for some expert to check this for any obvious errors.
I have a graph with the given constraints:
1. The graph is directed
2. At most 2 edges emerge from a vertex
3. At most 2 edges lead into a vertex
In this graph, I want to find cycles, so that:
a) each vertex is only a part of one cycle
b) every vertex is part of some cycle
I've spent some time on this problem already, but as easy as it sounds, it doesn't seem so trivial.
An extension of the question is as follows: each edge additionally has a span (two real numbers), and a 'pivot' must be selected so that all edges that end up in the cycles can 'fit' the pivot into their span range. I.e., for pivot 100 one can use an edge with span [20, 120], because 100 is in [20, 120].
Your problem is called finding a vertex-disjoint cycle cover. It can be found in polynomial time as described in this answer
Your extended question is not clear to me. Are you given the pivot? Then you'd simply remove the non-fitting edges. Do you have to choose one? With luck you may be able to modify the perfect matching search to satisfy this. But be careful: such additional conditions easily make the difference between a polynomial and an NP-hard problem.
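For reference, here is a hedged sketch of the standard reduction behind the claim above: split each vertex into an 'out' copy and an 'in' copy, add a bipartite edge for every directed edge, and look for a perfect matching (plain Kuhn augmenting-path matching below, chosen for brevity rather than speed). A perfect matching picks exactly one successor and one predecessor per vertex, which is exactly a vertex-disjoint cycle cover. For a given pivot in the extension, one would first drop the edges whose span does not contain the pivot and then run the same search.

```python
# Sketch: vertex-disjoint cycle cover via bipartite perfect matching (Kuhn's algorithm).

def cycle_cover(n, edges):
    """n vertices (0..n-1), edges = list of directed (u, v). Returns a list 'succ'
    with succ[u] = the vertex following u in its cycle, or None if no cover exists."""
    adj = [[] for _ in range(n)]          # adj[u] = candidates for u's successor
    for u, v in edges:
        adj[u].append(v)

    match_in = [-1] * n                   # match_in[v] = u means edge u -> v is chosen

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_in[v] == -1 or try_augment(match_in[v], seen):
                match_in[v] = u
                return True
        return False

    for u in range(n):
        if not try_augment(u, set()):
            return None                   # some vertex cannot get a successor

    succ = [None] * n
    for v, u in enumerate(match_in):
        succ[u] = v
    return succ

# Example: two vertex-disjoint directed triangles.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(cycle_cover(6, edges))              # [1, 2, 0, 4, 5, 3]
```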
My problem is a generalization of a task solved by the blossom algorithm of Edmonds. The original task is the following: given a complete graph with weighted undirected edges, find a set of edges such that
1) every vertex of the graph is incident to exactly one edge from this set (i.e. the vertices are grouped into pairs), and
2) the sum of the weights of the edges in this set is minimal.
Now, I would like to modify the first goal into
1') the vertices are grouped into sets of 3 vertices (or, in general, d vertices), and leave condition 2) unchanged.
My questions:
Do you know if this 'generalised' problem has a name?
Do you know of an algorithm solving it in a number of steps polynomial in the number of vertices (like the blossom algorithm for the original problem)? I don't see a straightforward generalisation of the blossom algorithm, as it is based on looking for augmenting paths on a graph compressed to a bipartite graph (and uses the Hungarian algorithm there), but augmenting paths do not seem to extend to groups of vertices other than pairs.
Best regards,
Paweł
I don't want to find all the minimum spanning trees; I just want to know how many of them there are. Here is the method I considered:
Find one minimum spanning tree using Prim's or Kruskal's algorithm, then compute the weights of all the spanning trees and increment a running counter whenever a weight equals the weight of the minimum spanning tree.
I couldn't find any method to compute the weights of all the spanning trees, and the number of spanning trees might be very large, so this method might not be suitable for the problem.
As the number of minimum spanning trees can be exponential, enumerating them one by one won't be a good idea.
All the weights will be positive.
We may also assume that no weight will appear more than three times in the graph.
The number of vertices will be less than or equal to 40,000.
The number of edges will be less than or equal to 100,000.
There is only one minimum spanning tree in a graph where all the edge weights are different. I think the best way of finding the number of minimum spanning trees must be something that uses this property.
EDIT:
I found a solution to this problem, but I am not sure why it works. Can anyone please explain it?
Solution: The problem of finding the length of a minimal spanning tree is fairly well-known; two simplest algorithms for finding a minimum spanning tree are Prim's algorithm and Kruskal's algorithm. Of these two, Kruskal's algorithm processes edges in increasing order of their weights. There is an important key point of Kruskal's algorithm to consider, though: when considering a list of edges sorted by weight, edges can be greedily added into the spanning tree (as long as they do not connect two vertices that are already connected in some way).
Now consider a partially-formed spanning tree using Kruskal's algorithm. We have inserted some number of edges with lengths less than N, and now have to choose several edges of length N. The algorithm states that we must insert these edges, if possible, before any edges with length greater than N. However, we can insert these edges in any order that we want. Also note that, no matter which of these edges we insert, the connectivity of the graph after processing all edges of length N is the same. (Consider two possible outcomes, one in which a particular edge from vertex A to vertex B was inserted and one in which it was not: in the second outcome, A and B must still end up in the same connected component, otherwise that edge would have been inserted at some point.)
These two facts together imply that our answer is the product, over each edge length K, of the number of ways to insert the edges of length K using Kruskal's algorithm. Since there are at most three edges of any length, the different cases can be brute-forced, and the connected components can be determined after each step as they normally would be.
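Here is a hedged sketch of that argument in code (the modulus 10^9 + 7 is my assumption, and copying the whole union-find structure per subset is done for clarity rather than speed): edges are grouped by weight, each group of at most three edges is brute-forced for the number of maximal acyclic subsets relative to the current components, and those counts are multiplied.

```python
# Sketch only: counts MSTs by multiplying, per weight class, the number of ways
# Kruskal's algorithm could add a maximal set of edges of that weight.
from collections import defaultdict
from itertools import combinations

MOD = 10**9 + 7      # assumed modulus; the question states none

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # would close a cycle
        self.parent[ra] = rb
        return True

def count_msts(n, edges):
    """edges: list of (weight, u, v) with 0-based vertices; returns #MSTs mod MOD."""
    groups = defaultdict(list)
    for w, u, v in edges:
        groups[w].append((u, v))

    dsu = DSU(n)
    result = 1
    for w in sorted(groups):                      # Kruskal order: increasing weight
        group = groups[w]
        best_size, best_count, best_subset = 0, 1, ()
        for r in range(1, len(group) + 1):        # <= 3 edges, so at most 7 subsets
            for subset in combinations(group, r):
                trial = DSU(n)
                trial.parent = dsu.parent[:]      # start from the current components
                if all(trial.union(u, v) for u, v in subset):   # subset is acyclic
                    if r > best_size:
                        best_size, best_count, best_subset = r, 1, subset
                    else:
                        best_count += 1
        result = result * best_count % MOD
        for u, v in best_subset:                  # commit any one maximal choice
            dsu.union(u, v)
    return result

# Tiny example: a triangle with equal weights has 3 minimum spanning trees.
print(count_msts(3, [(1, 0, 1), (1, 1, 2), (1, 0, 2)]))   # 3
```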
Looking at Prim's algorithm, it says to repeatedly add the edge with the lowest weight. What happens if there is more than one edge with the lowest weight that can be added? Possibly choosing one may yield a different tree than when choosing another.
If you use Prim's algorithm, run it with every edge as a starting edge, and also explore every tie you encounter, then you'll have a forest containing all the minimum spanning trees Prim's algorithm is able to find. I don't know if that equals the forest containing all possible minimum spanning trees.
This does still come down to finding all minimum spanning trees, but I can see no simple way to determine whether a different choice would yield the same tree or not.
MSTs and their count in a graph are well-studied. See for instance: http://www14.informatik.tu-muenchen.de/konferenzen/Jass08/courses/1/pieper/Pieper_Paper.pdf.
Sorry for posting this on a programming site, but there might be many programmers here who are experts in geometry and 3D geometry, so please allow this.
I have been given best-fit planes along with the original point data. I want to model a pyramid from this data, as the data represent a pyramid. My approach to this modelling is:
Finding the intersection lines (e.g. AB, CD, etc.) for each pair of adjacent planes
Then finding the pyramid top (T) from the previously found lines; as these lines don't pass exactly through a single point, a best-fit point is needed
Intersecting the available side planes with a desired horizontal plane to get the base
In the figure – black triangles are the original best-fitted triangles; red and blue triangles are model triangles.
I want to show that the points fit the pyramid model better than they fit the given best-fitted planes. (Assume the original planes are updated as shown.)
Actually, step 2 is done using a weighted least squares process. Each intersection line is assigned a weight proportional to the angle between the normal vectors of the corresponding planes. In this step, I tried to find the point which is closest to all the intersection lines, i.e. point T. According to the weights, the line positions might change under the influence of high-weight lines, which means the original planes could change a little bit. So I want to show that these new positions of the planes fit the original point data better than the original planes do.
Any idea how to show this? I am thinking of using RMSE and showing the before and after RMSE. But then I think I should use a weighted RMSE, since all the planes contributing to the point T are influenced, so I should treat this as a global case rather than looking at individual planes. But I can't figure out a way to show this. Or maybe I should use some other measure…
So I am confused and have no idea how to show this. Please help me.
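As a side note on step 2, here is a hedged sketch of the weighted least-squares point as I understand it (the function names, the representation of each line by a point and a direction, and the exact weighting scheme are assumptions): find the point T that minimizes the weighted sum of squared distances to the intersection lines; setting the gradient to zero gives a 3x3 linear system.

```python
# Sketch: T minimizing sum_i w_i * (squared distance from T to line_i), where line_i
# passes through point p_i with direction d_i.
import numpy as np

def weighted_closest_point(line_points, line_dirs, weights):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d, w in zip(line_points, line_dirs, weights):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        M = w * (np.eye(3) - np.outer(d, d))   # projects away the line direction
        A += M
        b += M @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)               # the weighted best-fit point T

# Example: the x-axis and the line {(0, t, 2)}; the best point is (0, 0, 1), halfway between.
print(weighted_closest_point([(0, 0, 0), (0, 1, 2)], [(1, 0, 0), (0, 1, 0)], [1.0, 1.0]))
```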
If you are given the best-fit planes, why not intersect the three of them to get a single unambiguous T, then determine the lines AT, BT, and CT?
This is not a rhetorical question, by the way. Your actual question seems to be for reassurance that your procedure yields "well-fitted" results, but you have not explained or described what kind of fit you're looking for!
Unfortunately, without this information, your question cannot be answered as asked. If you describe your goals, we may be able to help you achieve them -- or, if you have not yet articulated them for yourself, that exercise may be enough to let you answer your own question...
That said, I will mention that the only difference between the planes you started with and the planes your procedure ends up with should be due to floating point error. This is because, geometrically speaking, all three lines should intersect at the same point as the planes that generated them.
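As a concrete illustration of that suggestion (a sketch only; representing each plane as n·x = c is an assumption about the data format): with three non-parallel side planes, the apex T is the unique solution of a 3x3 linear system.

```python
# Sketch: the common point of three planes written as n_i . x = c_i.
import numpy as np

def apex_from_planes(normals, offsets):
    N = np.asarray(normals, dtype=float)   # one plane normal per row
    c = np.asarray(offsets, dtype=float)
    return np.linalg.solve(N, c)           # raises LinAlgError if the planes don't meet in a point

# Example: the planes x = 1, y = 2, z = 3 meet at (1, 2, 3).
print(apex_from_planes([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3]))
```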