Next Closest Pair Problem

I'm sure most are familiar with the closest pair problem, but is there another algorithm, or a way to modify the current CP algorithm, to get the next closest pair?

Easy one, in O(n log n):
1. Find the closest pair (p1, p2) in O(n log n).
2. Compute all pairs containing p1 or p2 (but not the pair (p1, p2) itself) and keep the closest one; call it E. This takes O(n).
3. Remove p1 and p2 from the point set used in step 1.
4. Find the closest pair of the remaining points, compare it to E, and keep the closer of the two, again in O(n log n).
You now have the second closest pair.
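A minimal Python sketch of this reduction, assuming a hypothetical closest_pair(points) helper that returns the closest pair of a point set in O(n log n) (for example, the standard divide-and-conquer routine):

from math import dist  # Euclidean distance, Python 3.8+

def second_closest_pair(points, closest_pair):
    # Step 1: closest pair of the full set.
    p1, p2 = closest_pair(points)

    # Step 2: best pair involving p1 or p2, excluding (p1, p2) itself, in O(n).
    rest = [q for q in points if q != p1 and q != p2]
    candidates = [(p1, q) for q in rest] + [(p2, q) for q in rest]
    e = min(candidates, key=lambda pair: dist(*pair))

    # Steps 3-4: closest pair with p1 and p2 removed, compared against E.
    return min(e, closest_pair(rest), key=lambda pair: dist(*pair))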

If only a constant number of minimal distances (next pairs) is needed, it is possible to modify steps 3-5 from the Wikipedia article, redefining d_Lmin, d_Rmin, and d_LRmin as constant-length lists of minimum distances. That uses c*log(n) memory.
If next is called fewer than O(n) times, then reformulating the CP problem to return the smallest distance larger than a given d amounts to the same thing as a next method. It can be implemented with the same approach as CP. I don't see a theoretical guarantee that step 4 can be accomplished in linear time, but there is the advantage of not having to check points in boxes that are too small.
If next is called more than O(n) times, then it is best to calculate all distances and sort them (if memory is not a problem).
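If memory does allow it, a minimal sketch of that brute-force precomputation (illustrative names; all O(n^2) pairs are sorted once, so the k-th closest pair becomes an O(1) lookup):

from itertools import combinations
from math import dist

def sorted_pairs(points):
    # All O(n^2) pairs, sorted by distance once up front.
    return sorted(combinations(points, 2), key=lambda pair: dist(*pair))

# Example: the second closest pair is sorted_pairs(pts)[1].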

Why isn't the time complexity of traversing a linked list (singly linked list) n-1?

My point is that when the pointer traverses a linked list up to the (n-1)th position, it gets the value of the nth node easily, because the address of the nth node is stored at the (n-1)th node. Hence, the time complexity should be n-1 instead of n.
Any Big O notation describes an upper bound. This means that although n-1 would describe the cost more precisely, it is still included in O(n).
Secondly, any constant term (the -1) is dropped from the expression when describing a Big O complexity class, because for any decently large n a constant term has no noticeable influence on the result.
In the same way, any constant factor on the n is removed: O(3n+5) is the same complexity class as O(n).
These two points combined are why the complexity is described as O(n) and not O(n-1), although the latter might technically still be correct and more precise.
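To make that concrete with the formal definition (f is O(g) if f(n) <= c*g(n) for some constant c and all sufficiently large n), a quick worked instance:

n - 1  <= 1*n  for all n >= 1,  so n-1  is O(n)
3n + 5 <= 4n   for all n >= 5,  so 3n+5 is O(n)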

Dictionary and factorial of large numbers

For n queries I am given a number x and I have to print its factorial modulo 1000000007.
def fact_eff(n, d):
    if n in d:
        return d[n]
    else:
        ans = n * fact_eff(n - 1, d)
        d[n] = ans
        return ans

d = {0: 1}
n = int(input())
while n != 0:
    x = int(input())
    print(fact_eff(x, d) % 1000000007)
    n = n - 1
The problem is that x can be as large as 100000, and I get a runtime error for values greater than about 3000 because the maximum recursion depth is exceeded. Am I missing something with the modulus operator?
Why would you use recursion in the first place to compute a simple factorial? You can check the dictionary in a loop. Or better, start at the highest valid memoized position and go higher from there, creating new entries as you go.
To save space, maybe only record n! every 32 iterations or something, so future calls need at most 31 multiplies. Still O(1) but trading some computation for huge space savings.
Also, does it work to apply the modulus before you get the final huge product? Like every few multiply steps to keep the numbers small? Or every single step if that keeps the numbers small enough for CPython's single-limb fast path. I think (x * y) % n = ((x%n) * y) % n. (But I didn't double-check that.)
If so, you could combine early modulo with sparse memoization to memoize the final modulo-reduced result.
(For numbers above 2^30, Python's big-integer multiply cost scales with the number of 30-bit chunks required to represent the number. Fortunately one of the multiplicands is always small, being the counter. Keeping the product small buys speed, but the division implied by % is not free, so it's a tradeoff. And doing any extra operations costs Python interpreter overhead, which may simply dominate anyway until the numbers get really huge.)
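A minimal iterative sketch along these lines, applying the modulus at every step and memoizing only the reduced values (the function name and table are illustrative, not from the original code). The identity (x * y) % n == ((x % n) * y) % n does hold, so the reduced table gives the same answers as reducing one huge product at the end:

MOD = 1000000007
fact_mod = [1]  # fact_mod[i] holds i! % MOD

def fact_eff_iter(x):
    # Extend the memo table from the largest entry computed so far,
    # reducing modulo MOD at each step so intermediates stay small.
    while len(fact_mod) <= x:
        fact_mod.append(fact_mod[-1] * len(fact_mod) % MOD)
    return fact_mod[x]

n = int(input())
for _ in range(n):
    print(fact_eff_iter(int(input())))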

Graph theory, all paths with given distance

So I found a problem where a traveller can travel a certain distance in a graph, and all bidirectional edges have some length (distance). When travelling an edge (in either direction) you get some money/gift (given in the question for every edge), so you have to find the maximum money you can collect within the distance you are allowed to travel. The basic problem is how to find all possible paths with the given distance (there might be loops in the graph); once all possible paths are found, the path with the maximum money collected is simply the answer. Note: any path you come up with should not contain a loop (it must be a simple path).
You are given an undirected connected graph with a double weight on each edge (distance and reward).
You are given a fixed number d corresponding to a possible distance.
For each pair of nodes (u, v), with u not equal to v, you are looking for:
1. all the paths {P_j} connecting u and v, with no repeated nodes, whose total distance is d;
2. the paths {P_hat(j)}, a subset of {P_j}, whose reward is maximal.
To get the first, I would try to use a modified version of the Floyd-Warshall algorithm, where you do not look only for the shortest path but for any path.
Floyd-Warshall uses a strategy based on considering a "middle node" w between u and v, and recursively finds the path minimising the distance between u and v.
You can do the same while keeping all paths instead of minimising, taking care to set to infinity, in the distance matrix, the nodes you have already visited, and excluding at runtime every partial path in the recursion whose distance exceeds d, or that reaches an end (connects u and v) with a distance shorter than d.
This can be generalised to an interval of possible distances [d, D] instead of a single value d; with a single exact value you would probably get the empty set all the time.
For the second step, you simply compare the rewards of the paths found in the first step and take the best one.
This is more a suggested direction than a complete answer, but I hope it helps!
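As a simpler, brute-force alternative for the enumeration step (not the Floyd-Warshall variant sketched above), here is a minimal DFS over simple paths that prunes once the remaining distance budget is exceeded and records the reward of every path whose total distance is exactly d. The edge-list format and function name are assumptions of mine, and edge distances are assumed positive:

from collections import defaultdict

def best_reward(edges, d):
    # edges: list of (u, v, distance, reward) for bidirectional edges.
    graph = defaultdict(list)
    for u, v, w, r in edges:
        graph[u].append((v, w, r))
        graph[v].append((u, w, r))

    best = None

    def dfs(node, remaining, reward, visited):
        nonlocal best
        if remaining == 0:  # total distance is exactly d
            best = reward if best is None else max(best, reward)
        for nxt, w, r in graph[node]:
            # Prune: never revisit a node (simple paths only) and never exceed the budget.
            if nxt not in visited and w <= remaining:
                visited.add(nxt)
                dfs(nxt, remaining - w, reward + r, visited)
                visited.remove(nxt)

    for start in list(graph):
        dfs(start, d, 0, {start})
    return best

This is exponential in the worst case, but it matches the formulation above directly: enumerate all simple paths of total distance d, then take the maximum reward.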

What is the inverse of O(log n) time?

I'm doing some math, and today I learned that the inverse of a^n is log_a(n). I'm wondering if this applies to complexity. Is the inverse of superpolynomial time logarithmic time, and vice versa?
I would think that the inverse of logarithmic time would be O(n^2) time.
Can you characterize the inverses of the common time complexities?
Cheers!
First, you have to define what you mean by inverse here. If you mean an inverse under function composition, with the linear function f(x) = x as the identity, then the inverse of f(x) = log x would be f(x) = 10^x. However, one could also define a multiplicative inverse, where the constant function f(x) = 1 is the identity; then the inverse of f(x) = x would be f(x) = 1/x. While this is a bit complicated, it isn't that different from asking, "What is the inverse of 2?" Without stating an operation, that is hard to answer: the additive inverse is -2, while the multiplicative inverse is 1/2, so the answer depends on which operator you have in mind.
In composing functions, the key question is the desired end result: is it O(n) or O(1)? The latter is much more challenging, as I'm not sure composing O(log n) with O(1) gives you a constant in the end, or whether it fails to negate the original count. For example, take a binary search with O(log n) time complexity and a basic print statement with O(1) time complexity: put together, you still get O(log n), because there are still log n calls within the composed function, each of which prints a number as it goes through the search.
If instead you take two different complexity functions and nest one inside the other, the overall complexity is the product of the two. In a double for loop where each loop is O(n), the overall complexity is O(n) x O(n) = O(n^2). That means that cancelling out the log n would require something with O(1/log n) complexity, which I'm not sure exists in reality.
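A tiny sketch of the "nesting multiplies complexities" point (the counter is only for illustration):

def nested_loops(n):
    # Two nested O(n) loops: the body runs n * n times, i.e. O(n) x O(n) = O(n^2).
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

print(nested_loops(10))  # 100 = 10 * 10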

Big O of Recursive Methods

I'm having difficulty determining the big O of simple recursive methods. I can't wrap my head around what happens when a method is called multiple times. I would be more specific about my areas of confusion, but at the moment I'm trying to answer some homework questions, and since I don't want to cheat, I ask that anyone responding to this post come up with a simple recursive method and provide a simple explanation of its big O. (Preferably in Java, a language I'm learning.)
Thank you.
You can define the order recursively as well. For instance, let's say you have a function f, and calculating f(n) takes k steps. Now you want to calculate f(n+1). If f(n+1) calls f(n) once, then f(n+1) takes k plus some constant number of steps. Each further invocation adds a constant number of steps, so this method is O(n).
Now look at another example. Let's say you implement Fibonacci naively by adding the two previous results:
fib(n) = { return fib(n-1) + fib(n-2) }
Now let's say you can calculate fib(n-2) and fib(n-1) each in about k steps. To calculate fib(n) you need k + k = 2k steps. Now say you want to calculate fib(n+1): you need roughly twice as many steps as for fib(n-1). So this seems to be O(2^n).
Admittedly, this is not very formal, but hopefully this way you can get a bit of a feel for it.
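If you want to see that growth empirically, here is a small sketch (Python for brevity, but the same counting works in Java) that counts the calls made by the naive Fibonacci:

def fib(n, counter):
    # Naive recursion: each call spawns two more calls.
    counter[0] += 1
    if n < 2:
        return n
    return fib(n - 1, counter) + fib(n - 2, counter)

for n in (10, 11, 12, 13):
    calls = [0]
    fib(n, calls)
    print(n, calls[0])  # the call count grows roughly geometrically with n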
You might want to refer to the master theorem for finding the big O of recursive methods. Here is the wikipedia article: http://en.wikipedia.org/wiki/Master_theorem
You want to think of a recursive problem like a tree. Then consider each level of the tree and the amount of work required there. Problems will generally fall into 3 categories: root heavy (first iteration >> rest of tree), balanced (each level has equal amounts of work), and leaf heavy (last iteration >> rest of tree).
Taking merge sort as an example:
define mergeSort(list toSort):
    if (length of toSort <= 1):
        return toSort
    list left = toSort from [0, length of toSort / 2)
    list right = toSort from [length of toSort / 2, length of toSort)
    return merge(mergeSort(left), mergeSort(right))
You can see that each call of mergeSort in turn calls 2 more mergeSorts of 1/2 the original length. We know that the merge procedure will take time proportional to the number of values being merged.
The recurrence relation is then T(n) = 2*T(n/2) + O(n). The 2 comes from the two recursive calls, and the n/2 comes from each call receiving only half the elements. However, across each level there are the same n elements in total to be merged, so the work at each level is O(n).
We know the work is evenly distributed (O(n) at each depth) and the tree is log_2(n) deep, so the big O of the recursive function is O(n*log(n)).
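A runnable Python version of the same pseudocode, with the recurrence T(n) = 2*T(n/2) + O(n) visible in the structure (Python rather than Java, but the analysis is identical):

def merge_sort(to_sort):
    if len(to_sort) <= 1:
        return to_sort
    mid = len(to_sort) // 2
    left = merge_sort(to_sort[:mid])    # T(n/2)
    right = merge_sort(to_sort[mid:])   # T(n/2)

    # Merge step: O(n) work at this level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 3]))  # [1, 2, 3, 5, 9]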

Resources