I need to calculate 9^n where n is a natural number. I used binary exponentiation, but the addition chain it produces is not optimal. An optimal addition chain does exist, but finding it is proven to be NP-complete and is very hard to compute. I cannot use a lookup table in my task. Also, this algorithm still doesn't use the fact that I know the base. Maybe there are some papers in number theory, or can you suggest a better solution?
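For reference, here is a minimal sketch (in Python, my own wording) of the binary square-and-multiply method the question refers to:

```python
# A minimal sketch of binary (square-and-multiply) exponentiation,
# the baseline the question wants to improve on.
def pow9(n):
    result, base = 1, 9
    while n:
        if n & 1:          # current bit set: multiply in this power of 9
            result *= base
        base *= base       # square: 9, 9^2, 9^4, 9^8, ...
        n >>= 1
    return result

assert pow9(13) == 9 ** 13
```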
While solving competitive programming questions, you are sometimes asked to compute the final answer as
" Since this number may be large, compute it modulo 1,000,000,007 (10^9+7) ".
Also, it is a fact that in Python 3 the plain int type is unbounded.
So, is it necessary to compute modulo 10^9+7 if I am solving my programming question in Python 3?
It doesn't matter what anyone here believes is fair or warranted. What matters is what the automated system or human in charge expects as an answer.
Of course, since modular operations add overhead that can push a solution past the given time limit, it would be best to find an optimal alternative, if permitted.
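For concreteness, a sketch of the usual Python 3 idiom (my example, not from the answers above): the built-in three-argument pow keeps every intermediate value below the modulus, which is far cheaper than building the huge exact integer and reducing at the end.

```python
MOD = 10**9 + 7

# Fast modular exponentiation: intermediates never exceed MOD.
print(pow(2, 10**6, MOD))

# For other operations, reduce after each step instead of at the end.
print((123_456_789 * 987_654_321) % MOD)
```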
Suppose that an O(n^2)-time α-approximate algorithm exists for one of the two problems in each of the following pairs:
Vertex Cover and Independent Set
Independent Set and Clique
Max-Flow and Min-Cut
Does this guarantee that an O(n^2)-time α-approximate algorithm exists for the other problem in the pair? I know that Clique reduces to Independent Set, which in turn reduces to Vertex Cover.
Not necessarily, for two reasons.
First of all, NP reductions are generally not linear in size. Some of them are, but usually an instance of size n will reduce to an instance of some other NP problem of size n^3 or so. Even if we found a linear-time 3SAT algorithm, we wouldn't have found linear-time algorithms for all NP-hard problems -- just polynomial ones. So if by "similar" you mean "also n^2", not in general.
Secondly, approximations don't generally transfer. Because of the non-linear growth in complexity (that's a simplification of why, but it'll do), approximation guarantees generally don't survive the reduction process. As a result, while all NP-complete problems are in a sense comrades in exact solution hardness, they are far from it in approximation hardness.
In certain specific cases, approximations do transfer (and one of your examples -- left as an exercise for the reader -- most definitely transfers). But it's in no way guaranteed.
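To make the second point concrete, here is a sketch (my own illustration, using the classic maximal-matching 2-approximation for Vertex Cover) of why a good cover's complement is not a good independent set:

```python
# 2-approximation for Vertex Cover: greedily take both endpoints of any
# edge that is not yet covered.
def two_approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Star graph: optimal cover = {0} (size 1); optimal independent set has size 4.
edges = [(0, i) for i in range(1, 5)]
cover = two_approx_vertex_cover(edges)    # {0, 1}: within 2x of optimal
independent = {0, 1, 2, 3, 4} - cover     # {2, 3, 4}: the complement
print(cover, independent)
# The cover guarantee (size <= 2 * OPT) gives no constant-factor guarantee
# for the complementary independent set on general graphs.
```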
Or rather: what are the definitions of a combinatorial algorithm and a linear algorithm, respectively?
To make it clear, because obviously the first responders misunderstood the question: I am not looking for a definition of an algorithm running in linear time vs. non-linear time. A linear algorithm is somehow related to linear programming, which is a technique for finding or approximating solutions to linear optimization problems.
Since NP-hard problems are so hard, there is a whole field trying to find approximate solutions. The traveling salesman problem, for instance, has several approximation algorithms which run in polynomial time and produce a solution within a given bound of the best one.
Some of these approximation algorithms are called linear algorithms, others combinatorial algorithms, and the latter seems to be preferred (why?). These are the two concepts I would like to understand.
The issue is one of problem formulation.
Just as you said, the Traveling Salesperson Problem (TSP) is NP-hard precisely because it has a discrete problem formulation (the salesperson either visits a city or not at a particular time). This discrete formulation makes the problem, and its algorithms, combinatorial. (Note that not all combinatorial problems are NP-hard; consider sorting algorithms.)
However, the Linear-Programming (LP) relaxation of TSP results in a linear algorithm. This is because the problem has been reformulated such that the salesperson visits a city a certain proportion of the time. The main reason for using an LP relaxation is because the relaxed version can be solved in polynomial time. However, the solution to the LP relaxation is not necessarily a solution to the original problem.
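As a concrete (if simplified) illustration of an LP relaxation, here is a sketch using Vertex Cover instead of TSP, for brevity; it assumes SciPy is available. The relaxed problem is solved in polynomial time, but the answer is fractional:

```python
from scipy.optimize import linprog

# LP relaxation of Vertex Cover on a triangle:
# minimize x0 + x1 + x2  subject to  x_u + x_v >= 1 for every edge (u, v).
edges = [(0, 1), (1, 2), (0, 2)]
A_ub = [[-1 if i in e else 0 for i in range(3)] for e in edges]  # negate for <=
res = linprog(c=[1, 1, 1], A_ub=A_ub, b_ub=[-1] * 3, bounds=[(0, 1)] * 3)
print(res.x, res.fun)  # [0.5 0.5 0.5] with value 1.5: each vertex is "half in"
                       # the cover, while the best integral cover has size 2
```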
A linear algorithm tends to work with just one set of data: 'Take all the numbers in set a, double them, and put the result in set b.' The number of operations is equal to the count of items in set a.
A combinatorial one works on combinations of sets: 'For each number in set a, work out the sum of that number and each number in set b, and print it to the screen.' The number of operations is the product of the size of set a and the size of set b.
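In code (a sketch with made-up data), the two patterns look like this:

```python
a = [1, 2, 3]
b = [10, 20]

# "Linear": one pass over a -- len(a) operations.
doubled = [x * 2 for x in a]

# "Combinatorial": every pairing of a and b -- len(a) * len(b) operations.
for x in a:
    for y in b:
        print(x + y)
```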
Combinatorial algorithms "explode" as their input grows. Linear algorithms grow in proportion to their input, while combinatorial algorithms grow in proportion to an exponent (or worse) of their input: enumerating all possible paths through a graph, for example.
I was glancing through the contents of Concrete Maths online. I had at least heard of most of the functions and tricks mentioned, but there is a whole section on Special Numbers. These numbers include Stirling Numbers, Eulerian Numbers, Harmonic Numbers, and so on. I have never encountered any of these weird numbers before. How do they aid in computational problems? Where are they generally used?
Harmonic Numbers appear almost everywhere! Musical Harmonies, analysis of Quicksort...
Stirling Numbers (first and second kind) arise in a variety of combinatorics and partitioning problems.
Eulerian Numbers also occur in several places, most notably in permutations and in the coefficients of polylogarithm functions.
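For a feel of how these families are computed, here is a sketch of the standard recurrences for two of them (the function names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of n items into k blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def eulerian(n, k):
    """Eulerian number: permutations of n elements with exactly k ascents."""
    if k == 0:
        return 1
    if k >= n:
        return 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

print(stirling2(5, 2))  # 15
print(eulerian(4, 1))   # 11
```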
A lot of the numbers you mentioned are used in the analysis of algorithms. You may not have these numbers in your code, but you'll need them if you want to estimate how long it will take for your code to run. You might see them in your code too. Some of these numbers are related to combinatorics, counting how many ways something can happen.
Sometimes it's not enough to know how many possibilities there are, because you need to enumerate them. Volume 4 of Knuth's TAOCP, in progress, gives the algorithms you need.
Here's an example of using Fibonacci numbers as part of a numerical integration problem.
Harmonic numbers are a discrete analog of logarithms and so they come up in difference equations just like logs come up in differential equations. Here's an example of physical applications of harmonic means, related to harmonic numbers. See the book Gamma for many examples of harmonic numbers in action, especially the chapter "It's a harmonic world."
These special numbers can help out in computational problems in many ways. For example:
You want to find out when your program to compute the GCD of 2 numbers is going to take the longest amount of time: try two consecutive Fibonacci Numbers (see the sketch after this list).
You want to have a rough estimate of the factorial of a large number, but your factorial program is taking too long: Use Stirling's Approximation.
You're testing for prime numbers, but for some numbers you always get the wrong answer: it could be you're using Fermat's primality test, in which case the Carmichael numbers are your culprits.
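As promised, a quick sketch of the Fibonacci worst case for Euclid's algorithm (consecutive Fibonacci numbers maximize the number of division steps, per Lamé's theorem):

```python
def gcd_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b  # Euclid's algorithm
        steps += 1
    return steps

fibs = [1, 1]
while len(fibs) < 30:
    fibs.append(fibs[-1] + fibs[-2])

# Consecutive Fibonacci numbers are the worst case for inputs of their size.
print(gcd_steps(fibs[21], fibs[20]))
```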
The most common general case I can think of is in looping. Most of the time you specify a loop using a (start;stop;step) type of syntax, in which case it may be possible to reduce the execution time by using properties of the numbers involved.
For example, summing up all the numbers from 1 to n in a loop, when n is large, is definitely slower than using the identity sum = n*(n + 1)/2.
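A minimal check of that identity:

```python
n = 10_000_000
total_loop = sum(range(1, n + 1))  # n additions
total_formula = n * (n + 1) // 2   # constant time
assert total_loop == total_formula
```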
There are a large number of examples like these. Many of them are in cryptography, where the security of information systems sometimes depends on tricks like these. They can also help with performance and memory issues, because once you know the formula, you may find a faster or more efficient way to compute the things you actually care about.
For more information, check out wikipedia, or simply try out Project Euler. You'll start finding patterns pretty fast.
Most of these numbers count certain kinds of discrete structures (for instance, Stirling Numbers count Subsets and Cycles). Such structures, and hence these sequences, implicitly arise in the analysis of algorithms.
There is an extensive list at OEIS that lists almost all sequences that appear in Concrete Math. A short summary from that list:
Golomb's Sequence
Binomial Coefficients
Rencontres Numbers
Stirling Numbers
Eulerian Numbers
Hyperfactorials
Genocchi Numbers
You can browse the OEIS pages for the respective sequences to get detailed information about the "properties" of these sequences (though not exactly applications, if that's what you're most interested in).
Also, if you want to see real-life uses of these sequences in analysis of algorithms, flip through the index of Knuth's Art of Computer Programming, and you'll find many references to "applications" of these sequences. John D. Cook already mentioned applications of Fibonacci & Harmonic numbers; here are some more examples:
Stirling Cycle Numbers arise in the analysis of the standard algorithm that finds the maximum element of an array (TAOCP Sec. 1.2.10): how many times must the current maximum be updated during the scan? It turns out that the probability that the maximum needs to be updated exactly k times for an array of n elements is p[n][k] = StirlingCycle[n, k+1]/n!. From this, one can derive that, on average, approximately ln(n) updates will be necessary.
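A quick empirical check of that estimate (my own sketch):

```python
import math
import random

def max_updates(xs):
    best, updates = xs[0], 0
    for x in xs[1:]:
        if x > best:
            best, updates = x, updates + 1
    return updates

n, trials = 1000, 2000
perm = list(range(n))
total = 0
for _ in range(trials):
    random.shuffle(perm)
    total += max_updates(perm)

# The exact expectation is H_n - 1 (about 6.49 for n = 1000),
# which grows like ln(n) (about 6.91 here).
print(total / trials, math.log(n))
```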
Genocchi Numbers arise in connection with counting the number of BDDs that are "thin" (TAOCP 7.1.4 Exercise 174).
Not necessarily a magic number from the reference you mentioned, but nonetheless --
0x5f3759df
-- the notorious magic number used to calculate inverse square root of a number by giving a good first estimate to Newton's Approximation of Roots, often attributed to the work of John Carmack - more info here.
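The trick itself, transcribed from the well-known C version into Python via the struct module (a sketch; the original operates directly on float bits):

```python
import struct

def fast_inv_sqrt(x):
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # reinterpret float bits as int
    i = 0x5f3759df - (i >> 1)                         # the magic first estimate
    y = struct.unpack('<f', struct.pack('<I', i))[0]  # back to float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson step

print(fast_inv_sqrt(4.0))  # ~0.499, vs the exact 0.5
```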
Not programming related, huh? :)
Is this directly programming related? Surely related, but I don't know how closely.
Special numbers, such as e, pi, etc., come up all over the place. I don't think that anyone would argue about these two. The golden ratio also appears with amazing frequency, in everything from art to other special numbers themselves (look at the ratio between successive Fibonacci numbers).
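A quick check of that Fibonacci claim:

```python
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a)               # 1.6180339887...
print((1 + 5 ** 0.5) / 2)  # the golden ratio, equal to double precision
```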
Various sequences and families of numbers also appear in many places in mathematics and therefore, in programming too. A beautiful place to look is the Encyclopedia of integer sequences.
I'll suggest this is an experience thing. For example, when I took linear algebra many, many years ago, I learned about the eigenvalues and eigenvectors of a matrix. I'll admit that I did not at all appreciate their significance until I saw them in use in a variety of places: in statistics, where they tell you about the uncertainty of an estimate from a covariance matrix, the size and shape of a confidence ellipse, principal component analysis, and the long-term state of a Markov process; in numerical methods, where they tell you about the convergence of a method, be it in optimization or an ODE solver; and in mechanical engineering, where you see them as principal stresses and strains.