I'd like to minimize several differences. For one difference, this seems straightforward:
target1 <- 1.887
data <- seq(0, 1, by = .001)

# Step 1
somefunction <- function(dat, target1, X){
  # some random function...
  t <- sum(dat)
  y <- t * X
  # minimize this difference
  diff <- target1 - y
  return(diff)
}
V1 <- optimize(f = somefunction,
               interval = c(0, 1),
               dat = data,
               target1 = target1,
               maximum = T)
V1$maximum
#[1] 6.610696e-05
#--> This value for `X` should minimize the difference...
V1$maximum * sum(data)
#[1] 0.03308653
#--> as close to zero as we get
Now, I'd like to minimize several differences in one step using optim, but this does not work properly:
# Step 2
set.seed(1)
data2 <- data.frame(dat1 = seq(0, 1, by = .01),
                    dat2 = runif(101),
                    dat3 = runif(101))

somefunction_general <- function(dat, target1, target2, target3, X){
  # some random function...
  y  <- sum(dat[, 1]) * X[1]
  y1 <- sum(dat[, 2]) * X[2]
  y2 <- sum(dat[, 3]) * X[3]
  # minimize these differences...
  diff1 <- target1 - y
  diff2 <- target2 - y1
  diff3 <- target3 - y2
  # almost certain that this is wrong...
  vtr <- sum(abs(diff1), abs(diff2), abs(diff3))
  return(vtr)
}
V2 <- optim(par = c(1, 1, 1),
            fn = somefunction_general,
            dat = data2,
            target1 = 1.8,
            target2 = 2,
            target3 = 4,
            control = list(fnscale = -1))
sum(data2[,1])
#[1] 50.5
sum(data2[,2])
#[1] 44.27654
sum(data2[,3])
#[1] 51.73668
V2$par[1]*sum(data2[,1])
#[1] 1.469199e+45
V2$par[2]*sum(data2[,2])
#[1] 1.128977e+45
V2$par[3]*sum(data2[,3])
#[1] 2.923681e+45
Looks like there's some disagreement between the first function and the second. In the first function, you're returning target1 - sum(dat)*X and then trying to find the maximum over X values in [0, 1].
But since you're returning the raw difference and not its absolute value, you're actually just maximizing -sum(dat)*X, or, equivalently, minimizing sum(dat)*X. Since dat is constant, optimize naturally returns the smallest X on the interval each time (0 in the example).
For the first function, what you want is to return the absolute value of the difference and then find the minimum, not the maximum. The fix for the second function, somefunction_general, is even simpler: since it already returns sum(abs(diff1), abs(diff2), abs(diff3)), just make sure the minimum is sought by getting rid of control = list(fnscale = -1):
V2 <- optim(par = c(1, 1, 1),
            fn = somefunction_general,
            dat = data2,
            target1 = 1.8,
            target2 = 2,
            target3 = 4)
V2$par
#[1] 0.03564358 0.03837754 0.07748929
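For completeness, here is a minimal sketch of the fix to the first function described above (the names somefunction_abs and V1_fixed are just for illustration): return the absolute difference and minimize it:

somefunction_abs <- function(dat, target1, X){
  abs(target1 - sum(dat) * X)
}
V1_fixed <- optimize(f = somefunction_abs, interval = c(0, 1),
                     dat = data, target1 = 1.887)
V1_fixed$minimum * sum(data)
# close to 1.887, so the difference is now near zero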
You can instead write one function that works whether there is a single parameter or several, so optim handles all these cases:
somefunction_general <- function(X, dat, target){
  dat <- as.matrix(dat)
  y <- colSums(dat) * X
  sum((target - y)^2) # sum of squared differences (SSE-style objective)
}
Let's test this:
data2 <- data.frame(dat1 = seq(0, 1, by = .01),
                    dat2 = runif(101),
                    dat3 = runif(101))
data <- seq(0, 1, by = .001)
(a <- optim(0, somefunction_general, dat = data, target = 1.887, method = "BFGS"))
$par
[1] 0.00377023
$value
[1] 3.64651e-28
$counts
function gradient
25 3
$convergence
[1] 0
$message
NULL
We can note that the function value is essentially zero, so the parameter a$par is what we want. Check this out:
a$par*sum(data)
[1] 1.887
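As a sanity check, this one-parameter problem has the closed-form solution target / sum(data), which matches a$par:

1.887 / sum(data)
#[1] 0.00377023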
We can also have three parameters and one target, e.g.:
(b <- optim(c(0, 0, 0), somefunction_general, dat = data2, target = 1.887))
$par
[1] 0.03736837 0.04262253 0.03647203
$value
[1] 4.579334e-08
$counts
function gradient
100 NA
$convergence
[1] 0
$message
NULL
b$par*colSums(data2)
dat1 dat2 dat3
1.887103 1.887178 1.886942
Each one almost reached the target of 1.887. Note that this is similar to running the first optimization three times.
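Again there is a closed form, because the squared errors separate by coordinate: each component of the solution is simply target / colSums(data2):

1.887 / colSums(data2)
# matches b$par up to optimizer tolerance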
Lastly, three parameters and three targets:
(d <- optim(c(0, 0, 0), somefunction_general, dat = data2, target = c(1.8, 2, 4)))
$par
[1] 0.03564672 0.04516916 0.07730660
$value
[1] 2.004725e-07
$counts
function gradient
88 NA
$convergence
[1] 0
$message
NULL
The targets were achieved:
d$par*colSums(data2)
dat1 dat2 dat3
1.800160 1.999934 3.999587
This one function works in n dimensions. Prefer method = "BFGS" unless it fails to converge.
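For example, the three-target fit above can be rerun with BFGS (a sketch; the exact figures will differ slightly from the Nelder-Mead run, but the fitted values should again land on the targets):

optim(c(0, 0, 0), somefunction_general, dat = data2,
      target = c(1.8, 2, 4), method = "BFGS")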
What if there is one parameter with three targets? That is harder: unless a single parameter value can hit all three targets at once, the objective cannot reach zero.
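A quick illustration, reusing the earlier targets 1.8, 2 and 4, which are not proportional to colSums(data2):

# One scalar X is recycled against the three column sums: optim settles on
# the least-squares compromise, but the objective stays well above zero.
optim(10, somefunction_general, dat = data2,
      target = c(1.8, 2, 4), method = "BFGS")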
Suppose the parameter is 0.01; what targets does it imply?
colSums(data2)*0.01
dat1 dat2 dat3
0.5050000 0.4427654 0.5173668
Okay, suppose we were given these targets: can we recover the 0.01?
(e <- optim(10, somefunction_general, dat = data2, target = c(0.505, 0.4427654, 0.5173668), method = "BFGS"))
$par
[1] 0.01
$value
[1] 7.485697e-16
$counts
function gradient
12 3
$convergence
[1] 0
$message
NULL
We were able to converge because, in this case, a parameter existed that could take us there. Note that I changed the starting point to 10.
Related
I am using a simple polynomial to fit a curve.
poly <- function(a, b, c, x) a * x^2 + b * x + c
I'd like to find the value of x that results in the maximum value of the curve. Currently I create a grid of x values from 20000 to 50000, evaluate the function for each row, and then take max() of the result. It works, but I have a lot of groups and it creates a big data frame every time. It is very clunky, and I feel like there must be a better way.
Some typical coefficients are:
a <- -0.000000179
b <- 0.011153167
c <- 9.896420781
If you rearrange your function so that the variable you want to maximize over comes first, and you set default values for the other arguments like so:
poly <- function(x, a, b, c) a * x^2 + b * x + c
formals(poly)$a <- -0.000000179
formals(poly)$b <- 0.011153167
formals(poly)$c <- 9.896420781
Then you can use the optimize function to maximize over your interval:
optimize(poly, c(20000, 50000), maximum = T)
$`maximum`
[1] 31154.1
$objective
[1] 183.6298
Where $maximum is the x value at which the maximum occurs and $objective is the height.
If a is negative, the maximum of the parabola a * x^2 + b * x + c is reached where the derivative 2*a*x + b is zero, i.e. at -b/(2*a):
a<0
#[1] TRUE
-b/(2*a)
#[1] 31154.1
You could use optim. I think the other solutions in this thread are more appealing, but I'll write this up for completeness:
a <- -0.000000179
b <- 0.011153167
c <- 9.896420781
o <- optim(
  par = list(x = 0),
  fn = function(x){ -poly(a, b, c, x = x) },
  method = "Brent",
  lower = -50e3, upper = 50e3
)
Output:
> o
$par
[1] 31154.1
$value
[1] -183.6298
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
I was trying to prove that a certain function cannot go negative. As I did not manage to complete the proof, and also wanted to convince myself that it is true, I coded the function as follows:
test <- function(s, t){
  # s is a vector of positive reals of length d
  # t is a vector of complexes of length d-1
  d <- length(s)
  t <- c(t, (1 - sum(t * s[1:(d-1)])) / s[d])
  modulii <- abs((t + 1) / (t - 1))
  return(max(modulii) - 1)
}
# I want to minimize this test function over all
# s: positive reals of length d
# t: complexes of length d-1.
# How can I do that?

# simple starting points:
d <- 3
s <- runif(d)
t <- complex(real = runif(d-1), imaginary = runif(d-1))
test(s, t) # should be positive
How can I code an optimization routine that minimizes this function with respect to s[1],...,s[d] (all non-negative reals, with s[d] strictly positive) and t[1],...,t[d-1] (all complex valued)? I struggle with optim and complex numbers, and I want to be sure that the minimum cannot be negative ;)
Define a function proj which takes a vector of length 3*d-2 and produces a list with s and t from it: square the first d elements to form s (which keeps s non-negative without a constrained optimizer), and use the next d-1 elements and the final d-1 elements as the real and imaginary parts of t. Then define f to run proj and pass the result to test, and optimize f.
d <- 3
proj <- function(x) {
  d <- (length(x) + 2) / 3
  list(s = head(x, d)^2,
       t = complex(real = x[seq(d + 1, length = d - 1)],
                   imaginary = tail(x, d - 1)))
}
f <- function(x) with(proj(x), test(s, t))
result <- optim(rep(0.5, 3*d-2), f)
result
## $par
## [1] 1.0863555573 5.9011341467 -0.0009866435 -0.1252050359 1.0720624611
## [6] -0.3826544395 -6.2322265938
##
## $value
## [1] 8.911303e-09
##
## $counts
## function gradient
## 188 NA
##
## $convergence
## [1] 0
##
## $message
## NULL
proj(result$par)
## $s
## [1] 1.180168e+00 3.482338e+01 9.734655e-07
##
## $t
## [1] -0.3826544+0i -6.2322266+0i
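An alternative sketch, if you prefer box constraints to the squaring trick (f2 and the small lower bound 1e-8, which keeps s[d] strictly positive, are my additions, not part of the answer above):

# Constrain s >= 0 directly with L-BFGS-B instead of squaring.
# x holds s (first d elements), then Re(t), then Im(t).
f2 <- function(x) {
  d <- (length(x) + 2) / 3
  s <- head(x, d)
  t <- complex(real = x[seq(d + 1, length = d - 1)],
               imaginary = tail(x, d - 1))
  test(s, t)
}
optim(rep(0.5, 3*d - 2), f2, method = "L-BFGS-B",
      lower = c(rep(1e-8, d), rep(-Inf, 2*(d - 1))))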
How can I find the minimum value that can be returned by a function with an infinite domain in R?
f <- function(x) { x^2-1 }
print(minVal(f)) # -1
f <- function(x) { x^2+1 }
print(minVal(f)) # 1
So far I've tried optimize, but it requires a finite interval:
minVal <- function(f) {
  optimize(f, c(-100, 100))
}
Have you tried the nlm function?
nlm(function(x) x^2 -1, p = 1E3)
$minimum
[1] -1
$estimate
[1] -2.499974e-10
$gradient
[1] 9.995338e-07
$code
[1] 1
$iterations
[1] 1
p is the required starting value for the minimization.
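A minimal sketch of minVal built on nlm (the default starting point p = 0 is my choice here; nonconvex functions may need several starting points):

minVal <- function(f, p = 0) nlm(f, p = p)$minimum
minVal(function(x) x^2 - 1) # -1
minVal(function(x) x^2 + 1) # 1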
As well as nlm, you could try
optim(fn=function(x) x^2 -1, par = 1, method="BFGS")
This happens to work with a starting value of 1 or 1000 (although numerical accuracy can almost never be guaranteed for general nonlinear minimization).
This will work even better if you specify the gradient explicitly:
optim(fn=function(x) x^2 -1, gr=function(x) 2*x, par = 1, method="BFGS")
Based on some quick experiment, this seems to give the correct answer for starting values between -10^7 and 10^7.
I am trying to use a HoltWinters prediction iteratively along a vector, without using a loop, but I don't want the HoltWinters function applied to the first two elements. I have created a list of vectors using accumulate:
library(purrr)
v <- c(73,77,71,73,86,87,90)
cumv <- accumulate(v,c)
Using map across cumv:
# Omit first two
hw1 <- map(cumv[-c(1:2)], function(x) HoltWinters(ts(x),gamma=F,alpha=0.35,beta=0.2))
> hw1[[5]]
#Holt-Winters exponential smoothing with trend and without seasonal component.
#Call:
#HoltWinters(x = ts(x), alpha = 0.35, beta = 0.2, gamma = F)
#Smoothing parameters:
# alpha: 0.35
# beta : 0.2
# gamma: FALSE
#Coefficients:
# [,1]
#a 89.605082
#b 3.246215
This gives my desired result but doesn't include the first two iterations. I assumed using ifelse would work fine:
# Include first two, use ifelse
hw2 <- map(cumv, function(x) ifelse(length(x)>2,HoltWinters(ts(x),gamma=F,alpha=0.35,beta=0.2),
ifelse(length(x)>1,max(x),NA)))
Now, hw2[[7]] should (I thought) have returned an object identical to hw1[[5]], but it doesn't.
> hw2[[7]]
#[[1]]
#Time Series:
#Start = 3
#End = 7
#Frequency = 1
# xhat level trend
#3 81.00000 77.00000 4.000000
#4 80.80000 77.50000 3.300000
#5 80.82400 78.07000 2.754000
#6 85.75192 82.63560 3.116320
#7 89.39243 86.18875 3.203686
Why is it getting messed up?
As Dason mentioned in their comment, the ifelse() function is not the same as an if/else statement. ifelse() is vectorized: it returns one value for each element of its first argument, which should be a logical vector, e.g.
x <- c(TRUE, TRUE, FALSE, FALSE)
ifelse (x, "A", "B")
returns [1] "A" "A" "B" "B"
Here length(x) > 2 is a single TRUE, so ifelse() returns just the first component of the HoltWinters result (its fitted series, wrapped in a one-element list), which is exactly the time series you saw printed. For your purpose, you want a normal if/else construct:
hw2 <- map(cumv, function(x) {
  if (length(x) > 2) {
    return(HoltWinters(ts(x), gamma = F, alpha = 0.35, beta = 0.2))
  } else if (length(x) > 1) {
    return(max(x))
  } else {
    return(NA)
  }
})
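A quick check that the fix behaves as intended (assuming hw1 and hw2 from above): the first element is NA, the second a plain max, and from the third onward we get real HoltWinters fits that agree with hw1.

class(hw2[[7]])
#[1] "HoltWinters"
all.equal(hw2[[7]]$coefficients, hw1[[5]]$coefficients)
#[1] TRUE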
I am attempting to optimize (minimize) a function with two parameters that should have a unique solution.
foo <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  t <- 5 - sqrt((0-x1)^2 + (0-x2)^2)
  u <- 4 - sqrt((0-x1)^2 + (4-x2)^2)
  v <- 3 - sqrt((3-x1)^2 + (0-x2)^2)
  return(sum(t, u, v))
}
optim(c(0,0), foo)
Those of you who loved middle school math may recognize the formula for the distance between two points on a Cartesian (x,y) coordinate plane. The function foo is written so that x1 is the x coordinate and x2 is the y coordinate of the point I'm trying to find, which in this case is (3,4). However, I'm getting wacky output:
optim(c(0,0), foo)
$par
[1] -3.938866e+54 1.293779e+54
$value
[1] -1.243772e+55
$counts
function gradient
501 NA
$convergence
[1] 1
$message
NULL
Any idea what's going wrong?
As is, your function has no minimum: t, u and v decrease without bound as (x1, x2) moves farther away from the three fixed points (0,0), (0,4) and (3,0), so optim chases ever-larger coordinates.
It seems that you want to find a point (x1,x2) that minimizes the total distance to those three points. If that's the case, you should define foo as:
foo <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  t <- 5 + sqrt((0-x1)^2 + (0-x2)^2)
  u <- 4 + sqrt((0-x1)^2 + (4-x2)^2)
  v <- 3 + sqrt((3-x1)^2 + (0-x2)^2)
  return(sum(t, u, v))
}
Note that the only change is -sqrt to +sqrt. The answer is not the point (3,4) you expected; the minimum is a point inside the triangle whose vertices are those three points (its geometric median).
Solution:
z <- optim(c(0,0), foo)
z
$par
[1] 0.7510095 0.6954136
$value
[1] 18.76643
$counts
function gradient
59 NA
$convergence
[1] 0
$message
NULL
[The original answer included a plot marking this solution point in red inside the triangle.]
The main issue was that the 3 and 4 were flipped: the distance from the intended point to (0,4) is 3 and to (3,0) is 4, not the other way around. The other issue was that your differences were signed, so the optimizer could drive them arbitrarily negative. Also, the routine can converge to a local minimum that is not the absolute minimum, so the starting point matters. Instead:
foo <- function(x) {
  t <- 5 - sqrt((0-x[1])^2 + (0-x[2])^2)
  u <- 3 - sqrt((0-x[1])^2 + (4-x[2])^2)
  v <- 4 - sqrt((3-x[1])^2 + (0-x[2])^2)
  return(sum(abs(t), abs(u), abs(v)))
}
optim(c(4,0), foo)
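To see why this works: at the intended point (3, 4) all three absolute differences vanish exactly, so the global minimum of the objective is zero there, and optim should return a par close to c(3, 4).

foo(c(3, 4))
#[1] 0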