Solving equations in R similar to the Excel Solver parameters function

I have a question about solving equations in R, similar to what can be done in Excel.
However, I want to do it in R to show my colleagues that R is better :)
Here is the equation:
f0<-1e-9
t_pw<-30e-9
a<-30.7397582453682
c<-6.60935546184612
P<-1-exp((-t_pw)*f0*exp(-a*(1-b/c)^2))
I want to find the value of b for which P = 0.5. In Excel I can do this by selecting the P column, setting it to 0.5, and then using the Solver parameters function.
Which method is best for this, or is there another way to do it?
Thanks.

I have a strong suspicion that your equation was supposed to include -t_pw/f0, not -t_pw*f0, and that t_pw was supposed to be 3.0e-9, not 30e-9.
Pfun <- function(b, f0 = 1e-9, t_pw = 3.0e-9,
                 a = 30.7397582453682,
                 c = 6.60935546184612) {
  1 - exp((-t_pw)/f0 * exp(-a*(1 - b/c)^2))
}
Then @Lyzander's uniroot() suggestion works fine:
u1 <- uniroot(function(x) Pfun(x)-0.5,c(6,10))
The estimated value here is 8.05.
par(las=1,bty="l")
curve(Pfun,from=0,to=10,xname="b")
abline(h=0.5,lty=2)
abline(v=u1$root,lty=3)
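As a quick sanity check (my own addition, assuming the corrected Pfun above), the root can be plugged back in; the result should equal 0.5 up to uniroot's tolerance:
Pfun(u1$root)  # should be (numerically) 0.5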

If you want to solve an equation, the simplest thing to do is to use uniroot, which is in base R.
f0<-1e-9
t_pw<-30e-9
a<-30.7397582453682
c<-6.60935546184612
func <- function(b) {
  1 - exp((-t_pw)*f0*exp(-a*(1 - b/c)^2)) - 0.5
}
# interval is the range of values of b in which to look for a solution;
# extendInt='yes' lets uniroot widen it automatically if needed
> uniroot(func, interval=c(-1000, 1000), extendInt='yes')
Error in uniroot(func, interval = c(-1000, 1000), extendInt = "yes") :
no sign change found in 1000 iterations
As you can see above, my uniroot call fails. This is because your equation has no solution, which is easy to see: t_pw*f0 = 3e-17, so exp(-3e-17 * <positive number between 0 and 1>) is practically (very close to) 1, and your equation becomes 1 - 1 - 0.5 = -0.5, which is never 0. You can see the same with a plot:
curve(func)  # same result for curve(func, from=-1000, to=1000)
The function evaluates to (essentially) -0.5 for any b.
So uniroot is a fast way to do it, but probably for a corrected equation.
And a working example:
myfunc2 <- function(x) x - 2
> uniroot(myfunc2, interval=c(0,10))
$root
[1] 2
$f.root
[1] 0
$iter
[1] 1
$init.it
[1] NA
$estim.prec
[1] 8
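For completeness, here is a quick sketch (my own) assuming the corrected equation from the answer above, with -t_pw/f0 and t_pw = 3.0e-9; uniroot then does find a root:
t_pw2 <- 3.0e-9  # corrected value suggested in the first answer
func2 <- function(b) {
  1 - exp((-t_pw2)/f0 * exp(-a*(1 - b/c)^2)) - 0.5
}
uniroot(func2, interval=c(6, 10))$root  # approximately 8.05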

Related

Is there a way to solve an equation in R?

I'm trying to write an equation in R, and then solve it. I'm fairly new to R, so it's probably a basic question, but I haven't been able to make much sense of the CRAN notes on the several packages that come up in a Google search.
My equation:
F - b*ln(|1 + F/b|) - 0.05*t = 0
I'm trying to solve for F, and have other equations/variables in R that define b and t already.
I guess what I'm asking is, how do I translate this formula into something in R, and go about solving it for F?
Assuming b and t are scalars with known values (here we assume 1 for both), we can minimize the square of the left-hand side, assuming the answer lies in the indicated interval; if the minimum achieves zero (which it does below) we have solved the equation. Note that F means FALSE in R, so we use FF for clarity.
fun <- function(FF, b, t) (FF - b * log(abs(1+ (FF/b))) - 0.05*t)^2
optimize(fun, c(-10, 10), b = 1, t = 1)
giving:
$minimum
[1] 0.3503927
$objective
[1] 7.525844e-12
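Alternatively (a sketch of my own, again assuming b = 1 and t = 1), the unsquared left-hand side can be handed to uniroot directly, since a solution of the equation is a sign change of that function:
fun2 <- function(FF, b = 1, t = 1) FF - b * log(abs(1 + FF/b)) - 0.05 * t
uniroot(fun2, c(0, 10))$root  # approximately 0.3504, matching the optimize() result above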

GRG Nonlinear R

I want to transform my Excel Solver model into a model in R. I need to find 3 sets of coordinates which minimize the distance to the 5 given coordinates. I've made a program which calculates a distance matrix and outputs the minimal distance from each input to the given coordinates. I want to minimize this function by changing the input; that is, I want to find the coordinates such that the sum of minimal distances is minimized. I tried several methods to do so; see the code below. (Yes, my distance-matrix function might be somewhat clunky, but that is because I had to reduce the input to one variable in order to run some algorithms such as nloptr, which would give warnings otherwise.) I've also seen some other questions (such as GRG Non-Linear Least Squares (Optimization)) but they did not change/improve the solution.
# First half of p holds the x coordinates, second half the y coordinates (yes, that's clunky)
p<-c(2,4,6,5,3,2) # initial points
x_given <- c(2,2.5,4,4,5)
y_given <- c(9,5,7,1,2)
f <- function(Coordinates){
  # Predefine result containers
  Term_1 <- NULL
  Term_2 <- NULL
  Distance <- NULL
  min_prob <- NULL
  l <- length(Coordinates)
  l2 <- length(x_given)
  half_length <- l/2
  s <- l2*half_length
  Distance_Matrix <- matrix(rep(1, s), nrow=half_length)
  # Creating the distance matrix
  for (k in 1:half_length){
    for (i in 1:l2){
      Term_1[i] <- (Coordinates[k] - x_given[i])^2
      Term_2[i] <- (Coordinates[k + half_length] - y_given[i])^2
      Distance[i] <- sqrt(Term_1[i] + Term_2[i])
      Distance_Matrix[k, i] <- Distance[i]
    }
  }
  d <- Distance_Matrix
  # Find the minimum in each row; that's what we want to obtain and minimize
  for (r in 1:nrow(d)){
    min_prob[r] <- min(d[r, ])
  }
  som <- sum(min_prob)
  return(som)
}
# Minimise
sol<-optim(p,f)
x<-sol$par[1:3]
y<-sol$par[4:6]
plot(x_given,y_given)
points(x,y,pch=19)
The solution, however, is clearly not optimal. I've tried the nloptr function, but I'm not sure which algorithm to use. Which algorithm could I use, or can I use/program another function which solves this problem? Thanks in advance (and sorry for the long, detailed question).
Look at the output of optim. It reached the iteration limit and had not yet converged.
> optim(p, f)
$`par`
[1] 2.501441 5.002441 5.003209 5.001237 1.995857 2.000265
$value
[1] 0.009927249
$counts
function gradient
501 NA
$convergence
[1] 1
$message
NULL
Although the result is not that different, you will need to increase the number of iterations to get convergence. If that is still unacceptable, then try different starting values.
> optim(p, f, control = list(maxit = 1000))
$`par`
[1] 2.502806 4.999866 5.000000 5.003009 1.999112 2.000000
$value
[1] 0.005012449
$counts
function gradient
755 NA
$convergence
[1] 0
$message
NULL
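If convergence were still an issue, another option (a sketch of my own, not from the original answer) is to restart optim from the previous solution:
sol1 <- optim(p, f, control = list(maxit = 1000))
sol2 <- optim(sol1$par, f, control = list(maxit = 1000))  # restart from the previous result
sol2$convergence  # should be 0, indicating successful convergence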

Optimization - Limits and simple constraint

I have a rather simple optimization question, and while I'm fairly decent with R, optimization is something I haven't done much of.
my.function <- function(parameters){
  x <- parameters[1]
  y <- parameters[2]
  z <- parameters[3]
  ((10*x^2) - ((y/2) * (z/4)))^2
}
result <- optim(c(7,10,18),fn = my.function, method = 'L-BFGS-B',
lower = c(2,7,7),
upper = c(15,20,20))
result$par
#[1] 2.205169 19.546621 19.902243
This is a made-up version of the problem I'm working on, so please forgive it if its purpose makes no sense. I have limits in place using the 'L-BFGS-B' method, but I need to add a constraint and I'm unsure how to do it. The rules I'm trying to implement are as follows:
x must be between 2 and 15
y must be between 7 and 20
z must be between 7 and 20
y <= z
It's the last one I don't know how to implement. Any help would be appreciated. Thank you.
Add a large number to the objective function if the constraint is violated, i.e. change the last line of my.function to:
((10*x^2) - ((y/2) * (z/4)))^2 + ifelse(y > z, 10^5, 0)
The result in this case is the following, which does satisfy the constraint. Also, since the objective is non-negative, its value cannot be less than 0, so we have achieved the minimum to within numeric tolerance.
result$par
## [1] 2.223537 19.776462 20.000000
result$value
## [1] 1.256682e-11
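As an alternative to the penalty (a sketch of my own, not from the original answer), base R's constrOptim() accepts linear inequality constraints of the form ui %*% par - ci >= 0, which can encode both the box limits and y <= z; the starting point must be strictly feasible:
ui <- rbind(c( 1, 0, 0),   # x >=  2
            c(-1, 0, 0),   # x <= 15
            c( 0, 1, 0),   # y >=  7
            c( 0,-1, 0),   # y <= 20
            c( 0, 0, 1),   # z >=  7
            c( 0, 0,-1),   # z <= 20
            c( 0,-1, 1))   # z - y >= 0, i.e. y <= z
ci <- c(2, -15, 7, -20, 7, -20, 0)
constrOptim(c(7, 10, 18), my.function, grad = NULL, ui = ui, ci = ci)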

R minimize absolute error

Here's my setup
obs1<-c(1,1,1)
obs2<-c(0,1,2)
obs3<-c(0,0,3)
absoluteError <- function(obs, x){
  return(sum(abs(obs - x)))
}
Example:
> absoluteError(obs2,1)
[1] 2
For a random vector of observations, I'd like to find the minimizer x, which minimizes the absolute error between the observation values and a vector whose entries are all x. For instance, clearly the argument that minimizes absoluteError(obs1, x) is x = 1 because this results in an error of 0. How do I find such a minimizer in general? I'd imagine this is a linear programming problem, but I've never implemented one in R before.
The median of obs is a minimizer for the absolute error. The following is a sketch of how one might try proving this:
Let the median of a set of n observations, obs, be m. Call the absolute error between obs and m f(obs,m).
Case n is odd:
Consider f(obs, m + delta) where delta is some nonzero number. Suppose delta is positive; then there are (n-1)/2 + 1 observations (those at or below the median) whose error terms each increase by delta, while the error terms of the remaining (n-1)/2 observations each decrease by at most delta. So f(obs, m + delta) - f(obs, m) >= delta > 0. The same argument can be made if delta is negative. Thus f(obs, m + delta) > f(obs, m) for any nonzero delta, so the median m is the unique minimizer in this case.
Case n is even:
Basically the same logic as above, except in this case any number between the two innermost numbers of the set is a minimizer.
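A quick numeric check of the claim (my own addition), using the absoluteError() function from the question:
obs <- c(0, 0, 3)
xs <- seq(-1, 4, by=0.01)
errs <- sapply(xs, function(x) absoluteError(obs, x))
xs[which.min(errs)]  # 0, which equals median(obs)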
I am not sure this answer is correct, and even if it is I am not sure this is what you want. Nevertheless, I am taking a stab at it.
I think you are talking about 'Least absolute deviations', a form of regression that differs from 'Least Squares'.
If so, I found this R code for solving Least absolute deviations regression:
fabs = function(beta0, x, y){
  b0 = beta0[1]
  b1 = beta0[2]
  n = length(x)
  llh = 0
  for (i in 1:n){
    r2 = (y[i] - b0 - b1*x[i])
    llh = llh + abs(r2)
  }
  llh
}
# here x and y are your data vectors
g = optim(c(1,1), fabs, x=x, y=y)
I found the code here:
http://www.stat.colostate.edu/~meyer/hw12ans.pdf
Assuming you are talking about Least absolute deviations, you might not be interested in the above code if you want a solution in R from scratch rather than a solution that uses optim.
The above code is for a regression line with an intercept and one slope. I modified the code as follows to handle a regression with just an intercept:
y <- c(1,1,1)
x <- 1:length(y)
fabs = function(beta0, x, y){
  b0 = beta0[1]
  b1 = 0
  n = length(x)
  llh = 0
  for (i in 1:n){
    r2 = (y[i] - b0 - b1*x[i])
    llh = llh + abs(r2)
  }
  llh
}
# The commands to get the estimator
g = optim(c(1), fabs, x=x, y=y, method='Brent', lower=(min(y)-5), upper=(max(y)+5))
g
I was not familiar with (i.e., had not heard of) Least absolute deviations until tonight. So, hopefully my modifications are fairly reasonable.
With y <- c(1,1,1) the parameter estimate is 1 (which I think you said is the correct answer):
$par
[1] 1
$value
[1] 1.332268e-15
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
With y <- c(0,1,2) the parameter estimate is 1:
$par
[1] 1
$value
[1] 2
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
With y <- c(0,0,3) the parameter estimate is 0 (which you said is the correct answer):
$par
[1] 8.613159e-10
$value
[1] 3
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
If you want R code from scratch, there is additional R code in the file at the link above which might be helpful.
Alternatively, perhaps it might be possible to extract the relevant code from the source file.
Alternatively, perhaps someone else can provide the desired code (and correct any errors on my part) in the next 24 hours.
If you come up with code from scratch please post it as an answer as I would love to see it myself.
lad = function(x, y){
  SAD = function(beta, x, y) {
    # sum of absolute deviations for intercept beta[1] and slope beta[2]
    return(sum(abs(y - (beta[1] + beta[2] * x))))
  }
  # use the least-squares fit as starting values
  d = lm(y ~ x)
  ans1 = optim(par=c(d$coefficients[1], d$coefficients[2]),
               method="Nelder-Mead", fn=SAD, x=x, y=y)
  coe = setNames(ans1$par, c("(Intercept)", substitute(x)))
  fitted = setNames(ans1$par[1] + ans1$par[2]*x, c(1:length(x)))
  res = setNames(y - fitted, c(1:length(x)))
  results = list(coefficients=coe, fitted.values=fitted, residuals=res)
  class(results) = "lad"
  return(results)
}
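A hypothetical usage example for this lad() function (my own, with made-up data):
set.seed(1)
x <- 1:20
y <- 2 + 3*x + rnorm(20)
fit <- lad(x, y)
fit$coefficients  # intercept and slope estimated by least absolute deviations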

Relative Minimum via Derivative using R

I am trying to get the value of x that would minimize my equation y. I would like to use R.
The equation is:
y = (a - b*x)^2 / (2*b*x/(1+x) + c)
where a, b, and c are constants, all different from one another.
Thanks.
The standard optimize function should be sufficient for simple one-dimensional minimization:
a <- 2
b <- 1
c <- 1
func <- function(x){ (a - b*x)^2 / ((2*b*x/(1+x)) + c) }
optimize(f=func, interval = c(-3,3))
$minimum
[1] -0.3333377
$objective
[1] -277201.4
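One caution (my own addition): with these example constants the denominator 2*b*x/(1+x) + c vanishes at x = -1/3, so the large negative objective above reflects the function diverging at that pole rather than a genuine relative minimum. Restricting the search to a region where the denominator stays positive finds the true minimum:
optimize(f=func, interval=c(0, 3))
# should report a minimum near x = 2 with an objective near 0,
# since the numerator (a - b*x)^2 vanishes there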
