I am trying to do some computations using Laplace transforms in R. I used the continued-fractions approach to compute the Laplace transform of a birth-death process, as described in Abate (1999). But I cannot find a simple numerical routine to compute the inverse Laplace transform (evaluated at 0 in my case). Does anyone have ideas on how to do this in R?
Computing inverse Laplace transforms numerically is tricky. I remember seeing some relatively recent results from the ACM. Googling around a bit, I found some Python code implementing one of these algorithms; maybe you can adapt it to your purposes.
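If you want to stay in R, one self-contained option is the Gaver-Stehfest method, which only needs real evaluations of the transform. Below is a minimal sketch (the function name stehfest_invert and the toy transform are mine, not from any package). Note that, like most inversion schemes, it requires t > 0; for behavior at 0 you would evaluate at a small t or use the initial value theorem f(0+) = lim_{s -> Inf} s*F(s).

    # Minimal sketch of the Gaver-Stehfest inversion algorithm in base R.
    # Fs: a function returning the Laplace transform F(s) for real s > 0.
    # t : the (strictly positive) time at which to invert.
    # N : an even integer; 12-16 is typical in double precision.
    stehfest_invert <- function(Fs, t, N = 14) {
      stopifnot(N %% 2 == 0, t > 0)
      M <- N / 2
      # Stehfest weights V_k (depend only on N, so they could be precomputed)
      V <- numeric(N)
      for (k in 1:N) {
        j <- seq(floor((k + 1) / 2), min(k, M))
        V[k] <- (-1)^(k + M) * sum(
          j^M * factorial(2 * j) /
            (factorial(M - j) * factorial(j) * factorial(j - 1) *
             factorial(k - j) * factorial(2 * j - k))
        )
      }
      # f(t) ~ ln(2)/t * sum_k V_k * F(k * ln(2) / t)
      log(2) / t * sum(V * vapply((1:N) * log(2) / t, Fs, numeric(1)))
    }

    # Quick check: F(s) = 1/(s + 1) is the transform of exp(-t)
    stehfest_invert(function(s) 1 / (s + 1), t = 1)   # ~ 0.3679 = exp(-1)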
Does anyone know which algorithm is used in Julia to perform the fast Fourier transform? The documentation only says:
...
A one-dimensional FFT computes the one-dimensional discrete Fourier transform (DFT) as defined by
$$\operatorname{DFT}(A)[k] = \sum_{n=1}^{\operatorname{length}(A)} \exp\!\left(-i\,\frac{2\pi (n-1)(k-1)}{\operatorname{length}(A)}\right) A[n].$$
...
In particular, I have a discrepancy in my transformed data; i.e., the transformed data is shifted by a global phase of (I think) pi. Is there a convention to fix this global phase?
EDIT:
Perhaps it's worth saying that if I perform the inverse fft, then the discrepancy in the phase is corrected.
Julia uses the FFTW library, I believe, which implements several variants of the Cooley-Tukey algorithm, as described in the reference below.
http://www.fftw.org/fftw-paper-ieee.pdf
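Not about the algorithm itself, but about the convention: R's fft() uses the same definition as the one quoted from the Julia docs (unnormalized forward transform with a negative exponent), so a quick sanity check like the one below, computing the sum directly, can help rule out a convention mismatch before blaming the library for the phase. The data here are just placeholders.

    # Compute the DFT directly from the quoted definition and compare to fft()
    x <- rnorm(8)
    N <- length(x)
    dft <- sapply(1:N, function(k) {
      sum(exp(-1i * 2 * pi * (0:(N - 1)) * (k - 1) / N) * x)
    })
    max(abs(dft - fft(x)))   # should be ~ 1e-15 (pure rounding error)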
I have a mathematical optimization problem which I wish to solve in R. Consider this system/problem:
How can I solve this problem in R?
In this model, Budget, p_l (for all l) and mu_target are fixed constants, while mu is a given m-dimensional vector and R is a given n-by-m matrix.
I have looked into constrOptim and lp, but I don't have the imagination to implement the constraints. Those functions require a "constraint" matrix, and my problem is that I simply don't know how to design that constraint matrix. There are not many examples with decision variables on both sides of the equations.
Have a look at the nloptr package. It has quite extensive documentation with examples, and lots of algorithms to choose from, depending on what problem you are trying to solve.
NLoptr link
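To give a flavor of it (this is a toy objective and toy constraints, not your model): in nloptr you write each constraint as an ordinary R function returning values that must be <= 0, so having decision variables on both sides of an equation is not an issue; just move everything to one side.

    library(nloptr)

    eval_f <- function(x) sum((x - 1)^2)               # placeholder objective
    eval_g_ineq <- function(x) c(sum(x) - 1,           # encodes sum(x) <= 1
                                 x[1] - 2 * x[2])      # encodes x1 <= 2*x2

    res <- nloptr(
      x0 = c(0.1, 0.1),
      eval_f = eval_f,
      eval_g_ineq = eval_g_ineq,                       # convention: g(x) <= 0
      lb = c(0, 0), ub = c(1, 1),
      opts = list(algorithm = "NLOPT_LN_COBYLA", xtol_rel = 1e-8, maxeval = 1000)
    )
    res$solution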
This set of exercises has the student use a QP solver to solve an SVM in R. The suggested solver is the quadprog package. The quadratic problem is given as:
From the remark about the linear SVM, $K = XX'$, so $K$ is usually a singular matrix, of rank at most $p$, where $X$ is $n\times p$. But the quadprog solver requires a positive definite matrix, not just a PSD one, in place of $K$, as mentioned in many places (and verified). Any ideas what the instructor had in mind?
I think the workaround would be to add a small number (such as 1e-7) to the diagonal elements of the matrix which is supposed to be positive definite. I am not certain about the math behind it, but the sources below, as well as my experience, suggest that this solution works.
source: https://stats.stackexchange.com/questions/179900/optimizing-a-support-vector-machine-with-quadratic-programming
source: https://teazrq.github.io/stat542/hw/HW6.pdf
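To make the jitter trick concrete, here is a sketch using the standard soft-margin SVM dual. This exact formulation is my assumption about what the exercise intends, and the data, the C value, and the 1e-7 amount are arbitrary.

    library(quadprog)

    set.seed(1)
    n <- 40; p <- 2; C <- 1
    X <- rbind(matrix(rnorm(n/2 * p, mean =  1), ncol = p),
               matrix(rnorm(n/2 * p, mean = -1), ncol = p))
    y <- rep(c(1, -1), each = n/2)

    K <- X %*% t(X)                  # linear kernel: rank <= p, so only PSD
    Dmat <- (y %*% t(y)) * K         # dual Hessian Q_ij = y_i y_j K_ij
    Dmat <- Dmat + 1e-7 * diag(n)    # jitter: makes Dmat numerically PD
    dvec <- rep(1, n)

    # Constraints (columns of Amat, quadprog convention t(Amat) %*% b >= bvec):
    #   y'alpha = 0 (equality, meq = 1), alpha >= 0, alpha <= C
    Amat <- cbind(y, diag(n), -diag(n))
    bvec <- c(0, rep(0, n), rep(-C, n))

    sol <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
    alpha <- sol$solution
    sum(alpha > 1e-5)                # number of support vectors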
I'm having a hard time understanding why it would be useful to use the Taylor series for a function in order to gain an approximation of a function, instead of just using the function itself when programming. If I can tell my computer to compute e^(.1) and it will give me an exact value, why would I take an approximation instead?
Taylor series are generally not used to approximate functions. Usually, some form of minimax polynomial is used.
Taylor series converge slowly (it takes many terms to get the accuracy desired) and are inefficient (they are more accurate near the point around which they are centered and less accurate away from it). The largest use of Taylor series is likely in mathematics classes and papers, where they are useful for examining the properties of functions and for learning about calculus.
To approximate functions, minimax polynomials are often used. A minimax polynomial has the minimum possible maximum error for a particular situation (interval over which a function is to be approximated, degree available for the polynomial). There is usually no analytical solution to finding a minimax polynomial. They are found numerically, using the Remez algorithm. Minimax polynomials can be tailored to suit particular needs, such as minimizing relative error or absolute error, approximating a function over a particular interval, and so on. Minimax polynomials need fewer terms than Taylor series to get acceptable results, and they “spread” the error over the interval instead of being better in the center and worse at the ends.
When you call the exp function to compute e^x, you are likely using a minimax polynomial, because somebody has done the work for you and constructed a library routine that evaluates the polynomial. For the most part, the only arithmetic computer processors can do is addition, subtraction, multiplication, and division. So other functions have to be constructed from those operations. The first three give you polynomials, and polynomials are sufficient to approximate many functions, such as sine, cosine, logarithm, and exponentiation (with some additional operations of moving things into and out of the exponent field of floating-point values). Division adds rational functions, which is useful for functions like arctangent.
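A small R illustration of that last point, using interpolation at Chebyshev nodes as a cheap stand-in for a true Remez/minimax fit: both polynomials below have degree 4, but the Taylor one piles its error up at the ends of [-1, 1] while the Chebyshev one spreads it out and is several times more accurate overall.

    x <- seq(-1, 1, length.out = 1000)

    taylor4 <- function(x) 1 + x + x^2/2 + x^3/6 + x^4/24    # Taylor for exp about 0

    nodes <- cos((2 * (1:5) - 1) / 10 * pi)                  # 5 Chebyshev nodes -> degree 4
    coef  <- solve(outer(nodes, 0:4, `^`), exp(nodes))       # interpolating coefficients
    cheb4 <- function(x) drop(outer(x, 0:4, `^`) %*% coef)

    max(abs(exp(x) - taylor4(x)))   # ~ 1e-2, worst at the endpoints
    max(abs(exp(x) - cheb4(x)))     # ~ 1e-3, error spread across the interval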
For two reasons. First and foremost, most processors do not have hardware implementations of complex operations like exponentials, logarithms, etc. In such cases the programming language may provide a library function for computing them - in other words, someone used a Taylor series or other approximation for you.
Second, you may have a function that not even the language supports.
I recently wanted to use lookup tables with interpolation to get an angle and then compute the sin() and cos() of that angle. The trouble is that it's a DSP with no floating point and no trigonometric functions, so those two functions are really slow (software implementations). Instead, I put sin(x) in the table rather than x, and then used the Taylor series for y = sqrt(1 - x*x) to compute cos(x) from it. That Taylor series is accurate over the range I needed with only 5 terms (the denominators are all powers of two!), can be implemented in fixed point using plain C, and generates code that is faster than any other approach I could think of.
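For reference, the series in question is sqrt(1 - u) = 1 - u/2 - u^2/8 - u^3/16 - 5u^4/128 - ..., so the five coefficients really do have power-of-two denominators. A quick floating-point check in R (the original was fixed-point C on a DSP, and the usable angle range wasn't stated, so this just probes 0 to 30 degrees):

    cos_from_sin <- function(s) {            # 5-term series for sqrt(1 - s^2)
      u <- s * s
      1 - u/2 - u^2/8 - u^3/16 - 5*u^4/128
    }
    s <- sin(seq(0, pi/6, length.out = 100))
    max(abs(cos_from_sin(s) - sqrt(1 - s^2)))   # ~ 3e-5 over this range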
Is there a linear program optimizer in R that supports upper and lower bound constraints?
The libraries limSolve and lpSolve do not support bound constraints.
It is not at all clear from the CRAN Optimization Task View page which LP optimizers support bound constraints.
Please note that linear programming solvers typically assume their variables are nonnegative. If you need different lower bounds, the easiest thing is to perform a linear transformation on the variables, apply lpSolve (or Rglpk), and transform the variables back. This was explained in a posting to R-help some time ago -- which I am not able to find at the moment.
By the way, Rglpk has a parameter 'bounds' that lets you define upper and lower bounds through vectors rather than matrices. That may ease your concern about the constraint matrices growing too fast.
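For example (a made-up three-variable LP, just to show the shape of the 'bounds' argument):

    library(Rglpk)

    obj <- c(2, 4, 3)                       # maximize 2*x1 + 4*x2 + 3*x3
    mat <- rbind(c(3, 4, 2),
                 c(2, 1, 2))
    dir <- c("<=", "<=")
    rhs <- c(60, 40)

    # Default bounds are [0, Inf); override them per variable by index:
    bounds <- list(lower = list(ind = c(1L, 3L), val = c(-5, 1)),
                   upper = list(ind = c(2L),     val = c(10)))

    Rglpk_solve_LP(obj, mat, dir, rhs, bounds = bounds, max = TRUE)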
Commands in the Rglpk package handle such constraints. Or consider the General Purpose Continuous Solvers:
Package stats offers several general purpose optimization routines. First, function optim() provides an implementation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, bounded BFGS, conjugate gradient, Nelder-Mead, and simulated annealing (SANN) optimization methods. It utilizes gradients, if provided, for faster convergence. Typically it is used for unconstrained optimization but includes an option for box-constrained optimization.
Additionally, for minimizing a function subject to linear inequality constraints stats contains the routine constrOptim().
nlminb() offers unconstrained and constrained optimization using PORT routines.
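For completeness, the box constraints mentioned there look like this (these are general nonlinear optimizers, so for a pure LP I would still prefer Rglpk or lpSolve; the toy objective below is mine):

    f <- function(x) sum((x - c(3, -2))^2)   # unconstrained minimum at (3, -2)

    optim(c(1, 1), f, method = "L-BFGS-B", lower = c(0, 0), upper = c(2, 2))$par
    # [1] 2 0
    nlminb(c(1, 1), f, lower = c(0, 0), upper = c(2, 2))$par
    # [1] 2 0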