Given an ILP (integer linear programming) optimization problem with n integer variables and m constraints, and a branch-and-bound tree for solving it in canonical form:
how many levels (what height of tree) does the tree require to reach the all-integer optimal solution?
how many branches does the algorithm require to reach the all-integer optimal solution?
That's a hard question to answer - both could be zero if you are really lucky, or in the worst case the "height" (or depth) could be equal to the number of integer variables n. The number of branches could be much larger; it all depends on the problem and the solver.
My point is that when the pointer traverses a linked list up to position n-1, it gets the value of the nth node easily, because the address of the nth node is stored in the (n-1)th node. Hence, the time complexity should be n-1 instead of n.
Big O notation always describes an upper bound. This means that although n-1 would describe the complexity more precisely, it is still included in O(n).
Secondly, any constant term (the -1) is dropped from the expression when describing a Big O complexity class, because for any reasonably large n a constant term has no noticeable influence on the result.
In the same way, any constant factor on n is removed: O(3n+5) is the same complexity class as O(n).
These two things combined are the reason why you describe the complexity as O(n) and not O(n-1), even though the latter might technically still be correct and more precise.
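To make that concrete, here is a minimal Java sketch (the class and method names are my own, not from the question): reaching the nth node takes exactly n-1 link hops, which is still O(n).

```java
// Minimal singly linked list: reaching the nth node takes n-1 link hops,
// which is still O(n); the "-1" disappears in asymptotic notation.
class Node {
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

class ListDemo {
    // Returns the value of the node at 1-based position n.
    static int nth(Node head, int n) {
        Node cur = head;
        for (int i = 1; i < n; i++) {   // exactly n-1 iterations
            cur = cur.next;
        }
        return cur.value;
    }

    public static void main(String[] args) {
        Node head = new Node(10);
        head.next = new Node(20);
        head.next.next = new Node(30);
        System.out.println(nth(head, 3));   // prints 30 after 2 hops
    }
}
```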
I'm curious if you guys know an answer to this question that came to my mind.
We know that the average cost of BST insertion is O(log n), and the worst case as well as the average for AVL/splay trees is O(log n).
Since we are inserting n times (basically we are building a tree), the total cost comes out to n*log n.
How can we prove that we can't get lower than that? It's easy to observe but kind of hard to prove.
Maybe we can use recurrence relations and somehow bound them?
Thanks in advance.
As a hint, you can do an inorder traversal of any binary search tree in time O(n) to retrieve the elements of that BST in sorted order.
The algorithms for inserting into a red/black tree, AVL tree, splay tree, etc. are all comparison-based and work by comparing the newly-inserted element to a sequence of other elements in the tree. What would happen if you were able to do this in time o(n log n) (using little-o notation)?
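To make the hint concrete, here is a small Java sketch (the class names are mine, not from the answer): building a BST by repeated insertion and then doing an in-order traversal sorts the input. The traversal costs O(n), so if the n insertions took total time o(n log n), the whole procedure would be a comparison sort that beats the Ω(n log n) lower bound, which is impossible.

```java
import java.util.ArrayList;
import java.util.List;

// Repeated BST insertion followed by an in-order traversal is a comparison sort.
// The traversal is O(n), so the n insertions must cost Omega(n log n) in total.
class BstSortDemo {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static Node insert(Node root, int key) {            // plain unbalanced BST insert
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else                root.right = insert(root.right, key);
        return root;
    }

    static void inorder(Node root, List<Integer> out) { // O(n) traversal, sorted output
        if (root == null) return;
        inorder(root.left, out);
        out.add(root.key);
        inorder(root.right, out);
    }

    public static void main(String[] args) {
        int[] input = {5, 1, 4, 2, 3};
        Node root = null;
        for (int x : input) root = insert(root, x);
        List<Integer> sorted = new ArrayList<>();
        inorder(root, sorted);
        System.out.println(sorted);                      // [1, 2, 3, 4, 5]
    }
}
```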
This is homework for a data structures course. I'm not asking for code, but I'm having a hard time coming up with an efficient algorithm for this.
I have information about different family trees. Among those, I have to find the largest family and return the name of its eldest ancestor and the number of his descendants. The descendants may have kids between them (a brother and a sister may have a kid), and this has to be done in at most O(n^2) time.
What would be the most efficient way to solve this? I imagine using a breadth-first search on the graph, but that seems to require keeping descendant counters many levels upwards (if I am visiting a great^99-grandchild, for example).
Correct me if I'm wrong, but my assumption is that every family tree is separate from the others and that the root is the eldest ancestor. If that's the case, since you're counting all of a tree's nodes regardless, any unweighted graph traversal algorithm would give the same result; BFS would do the job. I don't get what you mean by "keeping up children counters for many levels upwards" though, just one counter per tree is fine, right?
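Here is a minimal Java sketch of that idea (the adjacency-map input format and all names are my own assumptions, not from the question): run a BFS from each root and count every person reached, using a visited set so that a child shared by two descendants is counted only once.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// BFS from the root of a family "tree" (really a DAG, since two descendants may
// share a child). The visited set ensures each descendant is counted once.
class FamilyDemo {
    static int countDescendants(String root, Map<String, List<String>> children) {
        Set<String> visited = new HashSet<>();
        Queue<String> queue = new ArrayDeque<>();
        queue.add(root);
        visited.add(root);
        int count = 0;
        while (!queue.isEmpty()) {
            String person = queue.remove();
            for (String child : children.getOrDefault(person, List.of())) {
                if (visited.add(child)) {   // true only the first time we see this child
                    count++;
                    queue.add(child);
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Map<String, List<String>> children = new HashMap<>();
        children.put("Elder", List.of("A", "B"));
        children.put("A", List.of("C"));
        children.put("B", List.of("C"));          // A and B share child C; counted once
        System.out.println(countDescendants("Elder", children)); // 3
    }
}
```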
I need to multiply long integer numbers with an arbitrary BASE of the digits using an FFT over integer rings. The operands are always of length n = 2^k for some k, and the convolution vector has 2n components, therefore I need a primitive 2n-th root of unity.
I'm not particularly concerned with efficiency issues, so I don't want to use Schönhage and Strassen's algorithm - just compute a basic convolution, then handle the carries, and nothing else.
Even though it seems simple to many mathematicians, my understanding of algebra is really bad, so I have lots of questions:
What are the essential differences or nuances between performing the FFT in integer rings modulo 2^n + 1 (perhaps composite) and in integer FIELDS modulo some prime p?
I ask this because 2 is a primitive (2n)-th root of unity in such a ring, since 2^n == -1 (mod 2^n + 1). In contrast, an integer field would require me to search for such a primitive root.
But maybe there are other nuances which will prevent me from using rings of such a form for the FFT.
If I picked integer rings, what are sufficient conditions for the existence of a 2^n-th root of unity in such a ring?
All other 2^k-th roots of unity of smaller order could be obtained by squaring this root, right?
What essential restrictions does the modulus of the ring impose on the multiplication? Maybe on the operands' length, maybe on the numeric base, maybe even on the numeric types used for the multiplication.
I suspect that there may be some loss of information if the coefficients of the convolution are reduced by the modulo operation. Is that true, and why? What are the general conditions that would let me avoid this?
Is there any possibility that plain primitive-typed dynamic lists (i.e. of long) will suffice for the FFT vectors, their product and the convolution vector? Or should I convert the coefficients to BigInteger just in case (and what is the "case" when I really should)?
If a general answer to these questions takes too long, I would be particularly satisfied by an answer under the following conditions. I've found a table of primitive roots of unity of order up to 2^30 in the field Z_70383776563201:
http://people.cis.ksu.edu/~rhowell/calculator/roots.html
So if I use a 2^30-th root of unity to multiply numbers of length 2^29, what precision/algorithmic/efficiency nuances should I consider?
Thank you so much in advance!
I am going to award a bounty to the best answer - please consider helping out with some examples.
First, an arithmetic clue about your identity: 70383776563201 = 1 + 65550 * 2^30. And that long number is prime. There's a lot of insight into your modulus on the page How the FFT constants were found.
Here's a fact of group theory you should know. The multiplicative group of integers modulo N is a product of cyclic groups whose orders are determined by the prime factors of N. When N is prime, there's a single cycle. The orders of the elements in such a cyclic group, however, are related to the prime factors of N - 1. Here 70383776563201 - 1 = 2^31 * 3 * 5^2 * 19 * 23, and the divisors of this number give the possible orders of elements.
(1) You don't need a primitive root necessarily, you need one whose order is at least large enough. There are some probabilistic algorithms for finding elements of "high" order. They're used in cryptography for ensuring you have strong parameters for keying materials. For numbers of the form 2^n+1 specifically, they've received a lot of factoring attention and you can go look up the results.
(2) The sufficient (and necessary) condition for an element of order 2^n is illustrated by the example modulus. The condition is that some prime factor p of the modulus has to have the property that 2^n | p - 1.
(3) Loss of information only happens when elements aren't multiplicatively invertible, which isn't the case for the cyclic multiplicative group of a prime modulus. If you work in a modular ring with a composite modulus, some elements are not so invertible.
(4) If you want to use arrays of long, you'll be essentially rewriting your big-integer library.
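As a small, hedged illustration of point (2) (my own sketch, not the answer's method): given the prime p = 70383776563201 from the question, with p - 1 = 2^31 * 3 * 5^2 * 19 * 23, an element of order 2^k can be found by raising a candidate g to the power (p-1)/2^k and checking that the result does not already collapse to 1 at exponent 2^(k-1).

```java
import java.math.BigInteger;

// Finds an element of order 2^k modulo the prime p = 70383776563201 from the
// question (p - 1 = 2^31 * 3 * 5^2 * 19 * 23, so 2^k | p - 1 for k <= 31).
// For a candidate g, w = g^((p-1)/2^k) has order dividing 2^k; it has order
// exactly 2^k iff w^(2^(k-1)) != 1 (mod p).
class RootOfUnityDemo {
    public static void main(String[] args) {
        BigInteger p = new BigInteger("70383776563201");
        int k = 30;                                      // we want a 2^30-th root of unity
        BigInteger pow2k = BigInteger.TWO.pow(k);
        BigInteger exponent = p.subtract(BigInteger.ONE).divide(pow2k);

        for (BigInteger g = BigInteger.TWO; ; g = g.add(BigInteger.ONE)) {
            BigInteger w = g.modPow(exponent, p);
            // order check: w^(2^(k-1)) must not be 1
            if (!w.modPow(BigInteger.TWO.pow(k - 1), p).equals(BigInteger.ONE)) {
                System.out.println("2^" + k + "-th root of unity mod p: " + w);
                return;
            }
        }
    }
}
```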
Suppose we need to multiply two n-bit integers, where
n = 2^30;
m = 2n; p = 2^n + 1.
Now,
w = 2, x = [w^0, w^1, ..., w^{m-1}] (mod p).
The issue is that each x[i] is itself up to n bits long, so we cannot compute w*a_i in O(1) time.
I wonder: does the divide-and-conquer technique always divide a problem into subproblems of the same type? By the same type, I mean that one can implement it using a single function with recursion. Can divide and conquer always be implemented by recursion?
Thanks!
"Always" is a scary word, but I can't think of a divide-and-conquer situation in which you couldn't use recursion. It is by definition that divide-and-conquer creates subproblems of the same form as the initial problem - these subproblems are continually broken down until some base case is reached, and the number of divisions correlates with the size of the input. Recursion is a natural choice for this kind of problem.
See the Wikipedia article for more good information.
A Divide-and-conquer algorithm is by definition one that can be solved by recursion. So the answer is yes.
Usually, yes! Merge sort is an example of this. Here is an animated version of it.
Yes. In the divide-and-conquer technique we divide the given bigger problem into smaller sub-problems. These sub-problems must be similar to the bigger problem, except that they are smaller in size.
For example, the problem of sorting an array of size N is no different from the problem of sorting an array of size N/2, except that the latter problem is smaller than the former.
If the smaller sub-problems are not similar to the bigger one, then the divide-and-conquer technique cannot be used to solve the bigger problem. In other words, a given problem can be solved using divide and conquer only if it can be divided into smaller sub-problems that are similar to the bigger problem.
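To make the sorting example concrete, here is a minimal recursive merge sort sketch in Java (my own illustration, not code from the answer): each call divides the array in half, conquers the two halves recursively, and combines them with a merge.

```java
import java.util.Arrays;

// Classic top-down (recursive) merge sort: divide, conquer, combine.
class MergeSortDemo {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;                                // base case
        int mid = a.length / 2;
        int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));     // divide + conquer
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);                                  // combine
    }

    static int[] merge(int[] left, int[] right) {
        int[] out = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 4, 7, 1, 3, 2, 6})));
    }
}
```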
Examining the merge sort algorithm will be enough for this question. After understanding the divide-and-conquer (and recursive) implementation of merge sort, you will see how difficult it would be to write it without recursion.
Actually, the most important thing here is the complexity of the algorithm, which is expressed in big-O notation and is O(n log n) for merge sort.
For the merge sort example, there is another variant called bottom-up merge sort. It is a simple, non-recursive version of it.
It is about 10% slower than recursive, top-down merge sort on typical systems. You can refer to the following link for more information; it is explained well in the third lecture.
https://www.coursera.org/learn/introduction-to-algorithms#
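For reference, here is a minimal bottom-up merge sort sketch in Java (my own illustration, not taken from the lecture): the same divide-and-conquer structure written iteratively by merging runs of size 1, 2, 4, and so on.

```java
import java.util.Arrays;

// Bottom-up merge sort: the same divide-and-conquer structure as top-down
// merge sort, but written iteratively by merging runs of size 1, 2, 4, ...
class BottomUpMergeSortDemo {
    static void sort(int[] a) {
        int n = a.length;
        int[] aux = new int[n];
        for (int width = 1; width < n; width *= 2) {
            for (int lo = 0; lo < n - width; lo += 2 * width) {
                int mid = lo + width;
                int hi = Math.min(lo + 2 * width, n);
                merge(a, aux, lo, mid, hi);
            }
        }
    }

    static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo);
        int i = lo, j = mid;
        for (int k = lo; k < hi; k++) {
            if      (i >= mid)        a[k] = aux[j++];
            else if (j >= hi)         a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else                      a[k] = aux[i++];
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 7, 1, 3, 2, 6};
        sort(a);
        System.out.println(Arrays.toString(a));   // [1, 2, 2, 3, 4, 5, 6, 7]
    }
}
```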
Recursion is a programming method where you define a function in terms of itself. The function generally calls itself with slightly modified parameters (in order to converge).
Divide the problem into two or more smaller subproblems.
Conquer the subproblems by solving them (recursively).
Combine the solutions to the subproblems into the solution for the original problem.
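As a small illustration of those three steps (my own example, not from the answer), here is a divide-and-conquer computation of the maximum of an array:

```java
// Divide-and-conquer maximum: divide the range in half, conquer each half
// recursively, and combine the two results with Math.max.
class MaxDemo {
    static int max(int[] a, int lo, int hi) {        // max of a[lo..hi]
        if (lo == hi) return a[lo];                  // base case: one element
        int mid = (lo + hi) / 2;                     // divide
        int left  = max(a, lo, mid);                 // conquer
        int right = max(a, mid + 1, hi);
        return Math.max(left, right);                // combine
    }

    public static void main(String[] args) {
        int[] a = {3, 9, 1, 7, 5};
        System.out.println(max(a, 0, a.length - 1)); // 9
    }
}
```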
Yes, divide and conquer can always be implemented using recursion.
A typical divide-and-conquer algorithm solves a problem using the following three steps.
Divide: Break the given problem into sub-problems of the same type.
Conquer: Recursively solve these sub-problems.
Combine: Appropriately combine the answers.
The following are some standard divide-and-conquer algorithms:
1) Binary search,
2) Quick Sort,
3) Merge Sort,
4) Strassen’s Algorithm
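For instance, a recursive binary search (a minimal sketch of my own, not from the answer) shows the same divide/conquer structure with a single sub-problem per call:

```java
// Recursive binary search: each call divides the range in half and recurses
// into the half that could contain the key. Returns the index or -1.
class BinarySearchDemo {
    static int search(int[] a, int key, int lo, int hi) {
        if (lo > hi) return -1;                                  // base case: empty range
        int mid = lo + (hi - lo) / 2;                            // divide
        if (a[mid] == key) return mid;
        if (key < a[mid]) return search(a, key, lo, mid - 1);    // conquer left half
        return search(a, key, mid + 1, hi);                      // conquer right half
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(search(a, 7, 0, a.length - 1));   // 3
        System.out.println(search(a, 4, 0, a.length - 1));   // -1
    }
}
```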
Imagine P is a problem of size n and S is its solution. If P is large enough, it can be divided into sub-problems P1, P2, P3, ..., Pk (say k of them), and there would be k solutions, one for each sub-problem: S1, S2, S3, ..., Sk. Now, if we combine the solutions of the sub-problems, we get the result S. In the divide-and-conquer strategy, whatever the main problem is, all sub-problems must be of the same kind: for example, if P is a sorting problem, then P1 through Pk must be sorting problems too. This is how it is recursive in nature, so divide and conquer will be recursive.