The running time of a simple recursive algorithm - recursion

I knew how to solve this, but I have forgotten after many years. Here's the problem:
T(n) is the running time of an algorithm, and T(n)=T(n-1)+T(n-2).
T(1) and T(2) take constant running time.
What's the running time of T(n)?

It depends on how you implement it. If you store the result of each T(n) in a table, the computation runs in linear time. If not, it takes time exponential in n, because the two recursive calls keep recomputing the same subproblems. (See the Wikipedia entry on memoization.)
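A minimal sketch of the two variants in Python (the function names are illustrative):

```python
from functools import lru_cache

def t_naive(n):
    """Naive recursion: the same subproblems are recomputed, exponential time."""
    if n <= 2:
        return 1  # T(1) and T(2) are constant
    return t_naive(n - 1) + t_naive(n - 2)

@lru_cache(maxsize=None)
def t_memo(n):
    """Memoized recursion: each value is computed once, linear time."""
    if n <= 2:
        return 1
    return t_memo(n - 1) + t_memo(n - 2)
```

Counting calls makes the difference obvious: t_naive(30) makes over a million recursive calls, while t_memo(30) makes about 30.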

Related

Time complexity of nlm-package in R?

I'm estimating a non-linear system (via seemingly unrelated regressions - SUR), using the systemfit package (nlsystemfit() function) with 4 equations, 32 parameters to estimate (!) and 412 observations. But my code is taking forever (my laptop is not a super-powerful one, though). So far, the process has been running for 13 hours. I'm not an expert in computational matters, but someone explained to me some time ago the concept of time complexity of algorithms (or big-O); according to that concept, the time to compute a certain algorithm can depend on a specific functional relation to the number of observations and/or coefficients.
Hence, I'm thinking of just stopping my process, simplifying the model (temporarily), and trying to run something simpler, just to check whether the estimated parameters make sense so far, and then running the whole model.
But all this makes sense only if I can change key elements in my model that reduce the processing time significantly. That's why I was searching Google for the time complexity of the nlm package (the nlsystemfit() function relies on nlm), but without success. So this is my question: does anybody know where I can find that information, or can anyone at least give me advice on how to test non-linear systems before running the whole model?
Since you didn't provide any substantial information about your model, or any code for it, it's hard to suggest a concrete improvement for your situation.
From what you said:
Hence, I'm thinking of just stopping my process, simplifying the model (temporarily), and trying to run something simpler, just to check whether the estimated parameters make sense so far, and then running the whole model.
It seems you require benchmarking, i.e. measuring the time your code takes to execute (although benchmarking can also cover memory usage or other performance metrics).
There are quite a few ways to benchmark code in R. These include calling Sys.time() or system.time() just before and right after your algorithm/function executes, or using libraries such as rbenchmark (a simple wrapper around the system.time() function), tictoc, bench and microbenchmark.
Among these, the last two are the preferable options: bench::mark() includes system_time(), a higher-precision alternative to system.time(), and microbenchmark is known to be a reliable way to accurately measure and compare the execution time of R expressions.
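For instance, a minimal microbenchmark sketch; the function f() below is just a hypothetical stand-in for a call like your estimation routine:

```r
library(microbenchmark)

# Hypothetical workload standing in for the real estimation call
f <- function(n) {
  x <- rnorm(n)
  sum(sort(x))
}

# Compare two problem sizes; each expression is run 10 times
microbenchmark(small = f(1e3), large = f(1e5), times = 10L)
```

Timing the same model on a few increasing subsample sizes this way gives a rough empirical sense of how the running time scales before you commit to the full run.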

Numerical optimization with MPI

I am trying to parallelize an optimization routine by using MPI directives. The structure of the program is roughly like in the block diagram at the end of the text. Data is fed to the optimization routine, which calls an objective function subroutine and another subroutine that calculates a matrix called the "Jacobian". The optimization routine iterates as many times as needed to reach a minimum of the objective function and exits with a result. The Jacobian is used to decide in which direction the minimum might lie and to take a step in that direction.
I don't have control over the optimization routine; I only supply the objective function and the function calculating the Jacobian. Most of the time is spent on calculating the Jacobian. Since each matrix element of the Jacobian is independent of the rest of the elements, it seems like a good candidate for parallelization. However, I haven't been able to accomplish this. Initially I was thinking that I could distribute the calculation of the Jacobian over a large number of nodes, each of which would calculate only some of the matrix elements. I did that, but after just one iteration all the threads on the nodes exit and the program stalls.
I am starting to think that without the source code of the optimization routine this might not be possible. The reason is that distributing the code over multiple nodes and instructing them to only calculate a fraction of the Jacobian messes up the optimization on all of them, except the master. Is there a way around this, using MPI and without touching the code in the optimization routine? Can only the function calculating the Jacobian be executed on all nodes except the master? How would you do this?
It turned out to be easier than I thought. As explained in the question, the worker processes were exiting after just one iteration. The solution is to enclose the Jacobian calculation executed by the workers in an infinite while loop, and to break out of it by sending a message from the main process (master) once it exits with the answer.
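A minimal sketch of that master/worker pattern using mpi4py (the tag values and the jacobian_rows() helper are illustrative, not from the original program):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

WORK, STOP = 0, 1  # illustrative message tags


def jacobian_rows(x, rows):
    # Hypothetical stand-in: compute only the assigned rows of the Jacobian
    return [[2.0 * xi * (r + 1) for xi in x] for r in rows]


if rank == 0:
    # Master: the optimizer drives this loop; here 3 fake iterations
    for _ in range(3):
        x = [1.0, 2.0, 3.0]  # current point supplied by the optimizer
        for w in range(1, size):
            comm.send(x, dest=w, tag=WORK)
        parts = [comm.recv(source=w) for w in range(1, size)]
        # ...assemble the full Jacobian from `parts` and hand it back
    for w in range(1, size):
        comm.send(None, dest=w, tag=STOP)  # release the workers
else:
    # Worker: loop forever until the master sends the STOP tag
    while True:
        status = MPI.Status()
        x = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send(jacobian_rows(x, rows=[rank - 1]), dest=0)
```

Run with e.g. mpiexec -n 4 python script.py; only rank 0 ever talks to the optimization routine, so the library code stays untouched.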

Linear time algorithm for finding the greatest common divisor

I have been doing some research and I have found some algorithms that have greater than O(N) runtime.
I am curious if anybody is aware of a linear time algorithm for finding the greatest common divisor?
If there is one, no one has found it yet. From Wikipedia:
the best known deterministic algorithm is by Chor and Goldreich, which (in the CRCW-PRAM model) can solve the problem in O(n/log n) time with n^(1+ε) processors.
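For reference, the usual sequential baseline is the Euclidean algorithm, which uses O(log min(a, b)) division steps; a minimal sketch in Python:

```python
def gcd(a, b):
    """Euclidean algorithm: O(log min(a, b)) division steps."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

Note that the n in the quoted bound refers to the bit length of the inputs, which is the relevant measure once the numbers themselves get large.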

Lowest cost path of a graph

I am working on a problem which drills down to this:
There is a connected undirected graph. I need to visit all the nodes
without visiting a node more than once. I can start and end at any
arbitrary node.
How can I go about this? Should I apply an algorithm like Floyd-Warshall from every possible start node, or is there a better way to do this?
Thanks.
A path that visits every node once and only once is called a Hamiltonian path, and the problem of finding one is called the Hamiltonian path problem.
First of all, this problem is NP-complete. An algorithm whose running time is bounded by a polynomial of the input size is called a polynomial-time algorithm. For example, most sorting algorithms require O(N log N) time, which is less than N^2, which makes them polynomial.
For NP-complete problems, no polynomial-time algorithm is known. Although no one has been able to prove it yet, most probably no polynomial-time algorithm for NP-complete problems exists. That means:
The running time of any algorithm you come up with will be proportional to an exponential function of the input size (i.e. if it solves the problem with 40 nodes in an hour, it may require 2 hours for 41 nodes, 4 hours for 42 nodes, ...), which is very bad news.
The algorithm you come up with will not be fundamentally much faster than one that proceeds by trial and error.
If your input size is small, start with a simple backtracking algorithm, like the sketch below. If you need to do better, a Google search with terms like "hamiltonian path" and "longest path" may provide an answer. Ultimately, if your input is large, you will have to lower your expectations (for example, settle for an approximation instead of an optimal solution).
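A minimal backtracking sketch in Python (the adjacency-set representation is an assumption about how the graph is stored):

```python
def hamiltonian_path(adj):
    """Backtracking search for a Hamiltonian path.

    adj: dict mapping each node to the set of its neighbours.
    Returns a list of nodes or None; worst-case time is exponential.
    """
    n = len(adj)

    def extend(path, visited):
        if len(path) == n:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                found = extend(path + [nxt], visited | {nxt})
                if found:
                    return found
        return None

    for start in adj:  # the path may start at any node
        found = extend([start], {start})
        if found:
            return found
    return None


# Example: the path graph 0-1-2-3
print(hamiltonian_path({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
```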

High order PDEs

I'm trying to solve a 6th-order nonlinear PDE (1D) with fixed boundary values (the extended Fisher-Kolmogorov equation - EFK). After failing with FTCS, my next attempt is MoL (either central differences in space or FEM) using e.g. LSODES.
How can this be implemented? I'm using Python/C + OpenMP so far, but I need some pointers to do this efficiently.
The EFK with an additional 6th-order term is
u_t = d u_6x - g u_4x + u_xx + u - u^3
where d, g are real coefficients, with
u(x,0) = exp(-x^2/16),
u_x = 0 on the boundary.
The domain is [0,300] and dx << 1, since I'm looking for pattern formation (subject to the values of d, g).
I hope this is sufficient information.
All PDE solutions like this will ultimately end up being expressed using linear algebra in your program, so the trick is to figure out how to get the PDE into that form before you start coding.
Finite element methods usually begin with a weighted residual method. Non-linear equations will require a linear approximation and iterative methods like Newton-Raphson. I would recommend that you start there.
Yours is a transient solution, so you'll have to do time stepping. You can either use an explicit method and live with the small time steps that stability limits will demand, or an implicit method, which will force you to do a matrix inversion at each step.
I'd first do a Fourier analysis of the linear piece to get an idea of the stability requirements.
The only term in that equation that makes it non-linear is the last one: -u^3. Have you tried starting by leaving that term off and solving the linear equation that remains?
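A minimal method-of-lines sketch along those lines in Python, using repeated central second differences and SciPy's stiff LSODA integrator (rather than LSODES); the coefficient values, grid resolution, time span, and the simplified reflecting treatment of the u_x = 0 boundary are all placeholder assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

d, g = 0.1, 1.0           # placeholder coefficients
L, N = 300.0, 1024        # domain [0, L]; in practice dx << 1 means larger N
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

def d2(u):
    """Central second difference with reflecting (zero-slope) ghost points."""
    up = np.empty(len(u) + 2)
    up[1:-1] = u
    up[0], up[-1] = u[1], u[-2]   # mirror interior neighbours: u_x = 0
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / dx**2

def rhs(t, u):
    """Semi-discrete EFK: u_t = d u_6x - g u_4x + u_xx + u - u^3."""
    u2 = d2(u)        # u_xx
    u4 = d2(u2)       # u_4x
    u6 = d2(u4)       # u_6x
    return d * u6 - g * u4 + u2 + u - u**3

u0 = np.exp(-x**2 / 16.0)
sol = solve_ivp(rhs, (0.0, 10.0), u0, method="LSODA", rtol=1e-6, atol=1e-9)
print(sol.status, sol.y[:, -1].max())
```

A 6th-order equation really needs three boundary conditions per side, so composing the reflected second difference is only a crude stand-in; a proper treatment (or a FEM weak form) would impose the extra conditions explicitly.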
UPDATE: Some additional thoughts prompted by comments:
I understand how important the u^3 term is. Diffusion involves a 2nd-order derivative with respect to space, so I wouldn't be so certain that a 6th-order equation will behave the same way. My experience with PDEs comes from branches of physics that don't have 6th-order equations, so I honestly don't know what the solution might look like. I'd solve the linear problem first to get a feel for it.
As for stability and explicit methods, it's dogma that the stability limits placed on time step size make them likely to fail, but the probability isn't 1.0. I think map-reduce and cloud computing might make an explicit solution more viable than it was even 10-20 years ago. Explicit dynamics have become a mainstream way to solve difficult statics problems, because they don't require a matrix inversion.
