Why is n log n so hard to invert?

Let's say I have a function whose space requirement is n log n, and I want to work out the maximum size of input it can handle for a given amount of available space, i.e. I want to find n where n*log(n) = c.
I followed an approach to calculate n that looks like this in R:
# Fixed-point iteration for z = log(log(R) - z), where z = log(log(n));
# once z converges, n = exp(exp(z)) solves n*log(n) = R.
step = function(R, z) { log(log(R) - z) }
guess = function(R) log(log(R))
inverse_nlogn = function(R, accuracy = 1e-10) {
  zi_1 = 0
  z = guess(R)
  while (abs(z - zi_1) > accuracy) {
    zi_1 = z
    z = step(R, z)
  }
  exp(exp(z))
}
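For reference, a quick sanity check of the iteration (a hedged sketch; the test value n = 1000 is an arbitrary choice of mine):
n <- 1000
c <- n * log(n)           # the "available space" for which we want to recover n
n_hat <- inverse_nlogn(c)
n_hat                     # should be very close to 1000
n_hat * log(n_hat) - c    # residual; should be near zero at the stated accuracy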
But I can't understand why it must be solved iteratively. For the range we are interested in (n > 1), the function is non-singular.

There's nothing special about n log n — nearly all elementary functions fail to have elementary inverses, and so have to be solved by some other means: bisection, Newton's method, Lagrange inversion theorem, series reversion, Lambert W function...
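For instance, a generic numeric route (my own sketch, not part of the answer above) is to hand n*log(n) - c to uniroot(); the bracket c(1, 1e12) is an arbitrary assumption about where the answer lies:
solve_nlogn <- function(c) {
  # find n in [1, 1e12] with n*log(n) = c by root bracketing
  uniroot(function(n) n * log(n) - c, interval = c(1, 1e12), tol = 1e-12)$root
}
solve_nlogn(1000 * log(1000))   # should return approximately 1000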

As Gareth hinted, the Lambert W function (e.g. here) gets you almost there; indeed n = c/W(c).
A wee google found this, which might be helpful.

Following up (being completely explicit):
library(emdbook)
n <- 2.5
c <- 2.5*log(2.5)
exp(lambertW(c)) ## 2.5
library(gsl)
exp(lambert_W0(c)) ## 2.5
There are probably minor differences in speed, accuracy, etc. between the two implementations; I haven't tested/benchmarked them extensively. (Now that I've tried
library(sos)
findFn("lambert W")
I discover that it's implemented all over the place: the games package, and a whole package that's called LambertW ...)
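For what it's worth, the closed form n = c/W(c) quoted above can be checked the same way (a one-line sketch, assuming the gsl package loaded in the snippet above):
c / lambert_W0(c) ## should also give 2.5, since c/W(c) = exp(W(c))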


What is going on with floating point precision here?

This question is in reference to an observation from a code-golf challenge.
The submitted R solution is a working solution, but a few of us (maybe just me) seem to be dumbfounded as to why the initial X=m reassignment is necessary.
The code has been golfed down a bit by @Giuseppe, so I'll write a few comments for the reader.
function(m){
  X = m
  # Re-assign input m as X
  while(any(X-(X=X%*%m))) 0
  # Instead of doing the meat of the calculation in the code block after `while`,
  # OP exploited its infinite looping properties to perform the
  # calculations within the condition check.
  # `-` here is an abuse of the inequality check and relies on `any` to coerce
  # the numeric to logical. See `as.logical(.Machine$double.xmin)`.
  # The code basically multiplies the matrix `X` with the starting matrix `m`
  # until the condition is met: X == X%*%m
  X
  # Return result
}
Well, as far as I can tell, multiplying X%*%m is equivalent to X%*%X, since X is just an iteratively self-multiplied version of m. Once the matrix has converged, multiplying in additional copies of m or X does not change its value. See a linear algebra textbook, or try v(m)%*%v(m)%*%v(m)%*%v(m)%*%v(m)%*%m%*%m after defining the above function as v. Fun, right?
So the question is, why does @CodesInChaos's implementation of this idea not work?
function(m){while(any(m!=(m=m%*%m)))0;m}
Is this caused by a floating point precision issue? Or is it caused by one of the functions in the code, such as the inequality check or .Primitive("any")? I do not believe it is caused by as.logical, since R seems to coerce errors smaller than .Machine$double.xmin to 0.
Here is a demonstration of the above. We simply loop and take the difference between m and m%*%m. This error becomes 0 as the stochastic matrix converges. It seems to converge, then eventually collapse to 0 or blow up to Inf, depending on the input.
mat = matrix(c(7/10, 4/10, 3/10, 6/10), 2, 2, byrow = T)
m = mat
for (i in 1:25) {
  m = m%*%m
  cat("Mean Error:", mean(m-(m=m%*%m)),
      "\n Float to Logical:", as.logical(m-(m=m%*%m)),
      "\n iter", i, "\n")
}
Some additional thoughts on why this is a floating point math issue:
1) The loop indicates that this is probably not a problem with any or any logical check/conversion step, but rather something to do with floating-point matrix math.
2) @user202729's comment in the original thread, noting that this issue persists in Jelly (a code-golf language), gives more credence to the idea that this is perhaps a floating point issue.
The different methods iterate different functions, both starting with seed value m. Function iteration only converges to a given fixed point if that fixed point is stable and the seed is within the basin of attraction of that fixed point.
In the original code, you are iterating the function
f <- function(X) X %*% m
The limit matrix is a stable fixed point under the assumption (stated in the Code Golf problem) that a well-defined limit exists. Since the function definition depends on m, it isn't surprising that the fixed point is a function of m.
On the other hand, the proposed variation using m = m %*% m is obtained by iterating the function
g <- function(X) X %*% X
Note that all idempotent matrices are fixed points of this function, but clearly they can't all be stable fixed points. Apparently, the limit matrix of the original iteration is not a stable fixed point of g (even though it is a fixed point).
To really nail this down, you would need to get into the theory of matrix fixed points under function iteration to show why the fixed point in the case of g is unstable.
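To make the contrast concrete, here is a hedged sketch (my own helper names, not from the question) that runs both iterations side by side with an iteration cap, reusing the assign-inside-the-condition trick from the golfed code:
iter_fixed <- function(m, maxit = 1000) {   # iterate X <- X %*% m (the f scheme)
  X <- m; k <- 0
  while (isTRUE(any(X != (X <- X %*% m))) && k < maxit) k <- k + 1
  list(iterations = k, result = X)
}
iter_square <- function(m, maxit = 1000) {  # iterate X <- X %*% X (the g scheme)
  X <- m; k <- 0
  while (isTRUE(any(X != (X <- X %*% X))) && k < maxit) k <- k + 1
  list(iterations = k, result = X)
}
m <- matrix(c(7/10, 4/10, 3/10, 6/10), 2, 2, byrow = TRUE)
iter_fixed(m)   # expected: stops after a modest number of steps at the limit matrix
iter_square(m)  # expected: rounding error is amplified by the repeated squaring, so
                # the entries eventually drift to 0/Inf (as in the demonstration
                # above) or the loop hits maxit, instead of settling at the limit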
This is indeed a floating point math issue. To see it, look at the results of this function:
test2 <- function(m) {
  c <- 0
  res <- list()
  while (any(m!=(m=m%*%m))) {
    c <- c + 1
    res[[c]] <- m
  }
  print(c)
  res
}
To test equality with some tolerance, you can use:
test3 <- function(m) {
  while (!isTRUE(all.equal(m, m <- m %*% m))) 0
  m
}
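Hypothetical usage with the matrix from the question:
mat <- matrix(c(7/10, 4/10, 3/10, 6/10), 2, 2, byrow = TRUE)
test3(mat)   # should terminate and return (approximately) the limit matrix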

Nonlinear equation involving summations in R

I am having the hardest time trying to implement this equation into a nonlinear solver in R. I am trying both the nleqslv and BB packages but so far getting nothing but errors. I have searched and read documentation until my eyes have bled, but I cannot wrap my brain around it. The equation itself works like this:
The Equation
s2 * sum(price^(2*x+2)) - s2.bar * sum(price^(2*x)) = 0
Where s2, s2.bar and price are known vectors of equal length.
The last attempt I tried in BB was this:
gamma = function(x){
  n = len(x)
  f = numeric(n)
  f[n] = s2*sum(price^(2*x[n]+2)) - s2.bar*sum(price^(2*x[n]))
  f
}
g0 = rnorm(length(price))
results = BBsolve(par=g0, fn=gamma)
From your description of the various parts used in the function, you seem to have muddled up the formula.
Your function gamma should most probably be written as
gamma <- function(x){
  f <- s2*sum(price^(2*x+2)) - s2.bar*sum(price^(2*x))
  f
}
s2, price and s2.bar are vectors according to your description, so the formula you gave will return a vector.
Since you have not given any data, we cannot test. I have tried testing with randomly generated values for s2, price and s2.bar; sometimes one gets a solution with both nleqslv and BB, but not always.
In the case of package nleqslv, the default method will not always work.
Since the package offers different methods, you should use the function testnslv from the package to see whether any of the provided methods finds a solution.
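As an illustration only (the data below are randomly generated stand-ins for s2, s2.bar and price, so a solution may or may not exist for any given draw):
library(nleqslv)
set.seed(42)
price  <- runif(5, 0.5, 2)
s2     <- runif(5)
s2.bar <- runif(5)
gamma <- function(x) {
  s2*sum(price^(2*x+2)) - s2.bar*sum(price^(2*x))
}
testnslv(x = rep(0, length(price)), fn = gamma)   # compares the available methods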

R: How to add additional constraints to DEoptim

I am trying to minimize an objective function using DEoptim, subject to a simple constraint. I am not clear on how to add the simple constraint to the call to DEoptim. Here is the objective function:
obj_min <- function(n, in_data) {
  gamma <- in_data$Gamma
  delta <- in_data$Delta
  theta <- in_data$Theta
  gammaSum <- sum(n * gamma)
  deltaSum <- sum(n * delta)
  thetaSum <- sum(n * theta)
  abs((EPC * gammaSum - 2 * abs(deltaSum)) / thetaSum)
}
My mapping function (to impose integer constraints) is as follows:
mappingFun <- function(x) {
  x[1:length(x)] <- round(x[1:length(x)], 0)
  x
}
My call to DEoptim is:
out <- DEoptim(obj_min, lower = c(rep(-5, length(in_data[, 1]))),
               upper = c(rep(5, length(in_data[, 1]))),
               fnMap = mappingFun, DEoptim.control(trace = F), in_data)
My in_data object (data frame) is:
Underlying.Price Delta Gamma Theta Vega Rho Implied.Volatility
1 40.69 0.9237 3.2188 -0.7111 2.0493 0.0033 0.3119
2 40.69 0.7713 6.2267 -1.6352 4.3240 0.0032 0.3402
3 40.69 0.5822 8.4631 -2.0019 5.5782 0.0338 0.3229
4 40.69 0.3642 8.5186 -1.8403 5.3661 0.0210 0.3086
5 40.69 0.1802 6.1968 -1.2366 3.7517 0.0093 0.2966
I would like to add a simple constraint that:
sum(n * delta) = target
In other words, the optimized parameters n, multiplied by the deltas in my in_data data frame, should sum to some target. For simplicity, let's just say 0.5. How do I impose
sum(n * delta) = 0.5
as a constraint? Thank you for your help!
OK, thank you for all of your suggestions. I have researched and worked through my problem from many angles, and I wanted to share my thoughts with everyone, in case they can be helpful to some of you.
Most obviously, in my particular objective function, deltaSum is a variable, and I am attempting to constrain it to a particular value. Simple substitution of this constrained value into the objective function is the (trivial) solution. However, if I were to introduce a constraint on a quantity which is not already a variable in the objective function, I can simply add checks which return Inf for any constraint I wish to impose, i.e.:
obj_func_sum_RRRs <- function(n, in_data) {
  # Declare deltaSum, gammaSum, thetaSum, vegaSum, and rhoSum from in_data
  deltaSum <- sum(n * in_data$Delta)
  gammaSum <- sum(n * in_data$Gamma)
  thetaSum <- sum(n * in_data$Theta)
  vegaSum  <- sum(n * in_data$Vega)
  rhoSum   <- sum(n * in_data$Rho)
  # Impose constraints
  # No dividing by 0:
  if (thetaSum == 0) {
    return(Inf)
  }
  # Specify that regardless of the length of the vector of variables to
  # be optimized, we only want our final results to include either 4 or 6
  # nonzero n's in our final optimized solution
  if (!sum(n[1:length(n)] != 0) == 4 &
      !sum(n[1:length(n)] != 0) == 6) {
    return(Inf)
  }
  (deltaSum + gammaSum) / thetaSum
}
The first check (thetaSum == 0, return Inf) works because, while Inf is a value the optimizer understands (and will never select as optimal), division by 0 in R returns NaN, which "breaks" the optimization process. This is a bit "hacky", in that it is likely NOT the most computationally efficient way to approach the problem, but to be honest, with the infrastructure that I am developing with a close friend and software-architect guru (which utilizes microservices deployed through the Microsoft Service Fabric), our long-range backtesting is still lightning quick. This methodology actually allows you to impose any number of constraints on your problem, although further testing would be needed to see how burdensome the computational complexity could become using this technique...
The Lagrange technique suggested in another answer can be viable, but only if you derive an analytical form of lambda on paper and then implement it in code. It is not always practical in application, and while you may be able to code up an algorithm to optimize the parameter, it sounds like a bad idea to paint yourself into a corner where you have to optimize a parameter which is, in turn, needed to optimize the original objective function. Simply adding checks as described above seems the better way to go.
Food for thought....
The DEoptim package description says:
Implements the differential evolution algorithm for global optimization of a real-valued function of a real-valued parameter vector.
The concept of global optimization doesn't have a place for constraints; it is also known as unconstrained optimization. So, sorry, but it's not possible directly. Having said that, you can always use the "Lagrange multiplier" hack if you must. To do it you need something like:
abs((EPC * gammaSum - 2 * abs(deltaSum))/thetaSum) - lambda* (sum(n * delta) - 0.5)
where you penalize the slack of your constraint.
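A hedged sketch of that idea, wired into the objective from the question (I penalise the squared violation instead of subtracting the raw slack as in the formula above; lambda and target are arbitrary choices of mine, and EPC is assumed to be defined in the calling environment, as in the question):
obj_min_pen <- function(n, in_data, lambda = 1e3, target = 0.5) {
  gammaSum <- sum(n * in_data$Gamma)
  deltaSum <- sum(n * in_data$Delta)
  thetaSum <- sum(n * in_data$Theta)
  abs((EPC * gammaSum - 2 * abs(deltaSum)) / thetaSum) +
    lambda * (deltaSum - target)^2   # penalty for violating sum(n * delta) = 0.5
}
out <- DEoptim(obj_min_pen, lower = rep(-5, nrow(in_data)),
               upper = rep(5, nrow(in_data)),
               control = DEoptim.control(trace = FALSE),
               fnMap = mappingFun, in_data = in_data)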
I am using a wrapper which customises the call to DEoptim based on external constraints. Not very elegant, I admit, but it works to some extent.
My objective function - a Monte Carlo simulation - is quite time consuming, so constraints are really helpful...
Chris
Due to the very specific character of what I am doing (Monte Carlo raytracing for the optimisation of neutron beam optics), I did not see any reason to add code. I think it is really the concept that matters here. I'll gladly share what I have with anybody interested. Just let me know.... Chris

Harmonic series sum function in R

I am trying to write a function which takes a positive real number and keeps adding terms of the harmonic series until the total sum exceeds the initial argument.
I need my function to display the total number of terms of the series that were added.
Here's my code so far:
harmonic <- function(n){
  x <- c(0, 1)
  while (length(x) < n) {
    position <- length(x)
    new <- 1/(x[position] + x[position-1])
    x <- c(x, new)
  }
  return(x)
}
I apologise for the errors in my code; unfortunately, I have been working with R for only a month, this is the first time I am using a while loop, and I couldn't find any useful information around.
Thank you, I'd really appreciate your help.
Here's an attempt based on some info from this post at maths.stackexchange: https://math.stackexchange.com/q/496116
I can't speak as to whether it is highly accurate in all circumstances or even the best or an appropriate way to go about this. Caveat emptor.
harmsum.cnt <- function(x, tol = 1e-09) {
  em.cons <- 0.577215664901533
  difffun <- function(x, n) x - (log(n) + em.cons + 1/(2*n) - 1/(12*n^2))
  ceiling(uniroot(difffun, c(1, 1e10), tol = tol, x = x)$root)
}
Seems to work alright though:
harmsum.cnt(7)
#[1] 616
harmsum.cnt(15)
#[1] 1835421
Compare:
tail(cumsum(1/1:616),1); tail(cumsum(1/1:615),1)
#7.001274
#6.999651
dput(tail(cumsum(1/1:1835421),1)); dput(tail(cumsum(1/1:1835420),1))
#15.0000003782678
#14.9999998334336
This is a partial answer, which I'll try to fill in later. On the assumption that you want an exact answer, rather than the excellent approximation formula thelatemail found, there are a few tools to consider.
First, use of a hash-table or memoise methods will allow you to save previous calculations, thus saving a lot of time.
Second, since the sum of a (finite) sequence is independent of the grouping, you can calculate, e.g., the first N terms and the next block of terms, (N+1):2N, independently. Use the parallel package to divide and conquer.
Third, before you get too deep into the morass, check the limits of floating-point accuracy via a call to .Machine$double.eps. Once your 1/n term comes close to that, you'll need to switch over to gmp and Rmpfr to get full accuracy in your calculations.
Now, just to clarify what you "should" be doing, a correct loop is
mylimit <- [pick a value]
harmsum <- 0
for (k in 1:N) {
  harmsum <- harmsum + 1/k
  if (harmsum >= mylimit) break
}
(or similar setup using while)
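For example, a while-based version that also reports the number of terms might look like this (a small sketch; harmonic_count is a made-up name):
harmonic_count <- function(limit) {
  total <- 0
  k <- 0
  while (total <= limit) {
    k <- k + 1
    total <- total + 1/k
  }
  k                  # number of terms needed for the partial sum to exceed `limit`
}
harmonic_count(7)    # should agree with harmsum.cnt(7) above, i.e. 616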

consistent evaluation of input parameters

Thanks to R's lazy evaluation of function arguments, it is possible to specify a consistent subset of input parameters and have the others automagically calculated.
Consider the following function, linking the concentration, mass, volume and molar weight for a dilution in chemistry,
concentration <- function(c = m / (M*V), m = c*M*V, V = m / (M*c), M = 417.84){
  cat(c("c=", c*1e6, "micro.mol/L\n",
        "m=", m*1e3, "mg\n",
        "M=", M, "g/mol\n",
        "V=", V*1e3, "mL\n"))
  ## mol/L, g, g/mol, L
  invisible(list(c=c, m=m, M=M, V=V))
}
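Hypothetical usage (the numbers are made up): supply any consistent subset of arguments, and the remaining ones are filled in lazily from their default expressions.
concentration(m = 1e-3, V = 20e-3)    # c is derived from m, V and the default M
concentration(c = 1.2e-4, V = 10e-3)  # m is derived from c, V and the default M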
Is there a way to specify only one of the equations and have R figure out the others by inversion? I realise this is limited to simple linear relationships, as the inversion cannot generally be expressed analytically.
concentration <- function(c = m / (M*V), m, V, M = 417.84){
  ## { magic.incantation }
  ## mol/L, g, g/mol, L
  invisible(list(c=c, m=m, M=M, V=V))
}
You might want to look at the BB package, and in particular the function BBsolve(). BBsolve does a Newton-Raphson backsolve of the equation(s) you feed it. As it happens :-), I wrote and published a function "ktsolve" which allows you to enter a set of equations and some subset of the variables, and it'll return the values of the other variables. (It's named in honor of the commercial TK!Solver package.) If you want to try it out, you can get it at http://witthoft.com/ktsolve.R (or go to http://witthoft.com/rtools.html and click on the link there).
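For the concentration example above, a minimal BBsolve() sketch (my own, with made-up numbers) that backs out V given c, m and M might look like this:
library(BB)
M <- 417.84
m <- 1e-3
c_target <- 1.2e-4
f <- function(V) m / (M * V) - c_target   # residual of c = m/(M*V)
sol <- BBsolve(par = 1, fn = f)
sol$par                                   # should be close to m / (M * c_target)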
