I have the following function, which I maximize using optim():
Budget = 2000
X = 4
Y = 5
min_values = c(0.3,0)
start_values = c(0.3,0.5)
max_values = c(1,1)
sample_function <- function(z,Spend){
Output = (z[1]*X*Spend) + (z[2]*Y*Spend)
return(Output)
}
MaxFunction <- optim(par = start_values,
                     fn = sample_function,
                     method = "L-BFGS-B",
                     lower = min_values,
                     upper = max_values,
                     control = list(maxit = 100000, fnscale = -1),
                     Spend = Budget)
However, I would like to add some constraints when maximizing, such as:
z[1] >= 1/3
and
z[1] + z[2] = 1
Any help will be much appreciated, since this is linked to a more complicated problem that I'm tackling.
Or, if there's a different method of solving the problem without using optim(), please let me know.
optim() is not a good option for general constrained optimization, but it can still handle your case if you reformulate the objective function sample_function: since z[1] + z[2] = 1, substitute z[2] = 1 - z[1] to eliminate the equality constraint, which leaves a single parameter with the box constraint 1/3 <= z[1] <= 1.
Below is an example:
min_values = 1/3
start_values = 0.5
max_values = 1
sample_function <- function(z,Spend){
z*X*Spend + (1-z)*Y*Spend
}
MaxFunction <- optim(par = start_values,
                     fn = sample_function,
                     method = "L-BFGS-B",
                     lower = min_values,
                     upper = max_values,
                     control = list(maxit = 100000, fnscale = -1),
                     Spend = Budget)
If you want to see the resulting values of z and 1 - z together, you can use
z1 <- MaxFunction$par
z2 <- 1- z1
Zopt <- c(z1,z2)
such that
> Zopt
[1] 0.3333333 0.6666667
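Alternatively, since this particular objective is linear in z, you can keep both constraints explicit and solve the problem as a linear program. A sketch using the lpSolve package (my suggestion, not part of the original answer; X, Y, and Budget as defined in the question):
library(lpSolve)
obj  <- c(X * Budget, Y * Budget)   # objective coefficients for z1, z2
cons <- rbind(c(1, 0),              # z1      >= 1/3
              c(1, 1))              # z1 + z2  = 1
res  <- lp(direction = "max", objective.in = obj,
           const.mat = cons, const.dir = c(">=", "="),
           const.rhs = c(1/3, 1))
res$solution  # 0.3333333 0.6666667
lp() treats all variables as nonnegative by default, and the equality constraint keeps them below 1, so no extra bounds are needed.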
I am asked to sample and then compute the marginal distribution $f(x_1,x_5)$ and plot it. I have the following code, which works, but dnorm is one-dimensional, so I was wondering if I need to change it to dmvnorm.
If so: I changed mu = mu.marginal and sigma = sigma.marginal and added a y sample, but dmvnorm raises an error because of non-array input. Any help is appreciated.
Model of the multivariate normal:
mu = c(1,2,6,2,-4)
sigma = c(1,2,1,0.5,2)
rho = diag(rep(1,5))
rho[1,2] = rho[2,1] = 0.4
rho[1,3] = rho[3,1] = -0.3
rho[1,4] = rho[4,1] = -0.7
rho[3,5] = rho[5,3] = 0.2
rho[4,5] = rho[5,4] = 0.5
Sigma = rho * (sigma %o% sigma)
My code:
p = c(1,5)
(mu.marginal = mu[p])
(Sigma.marginal = Sigma[p,p])
# p selects two components, so the marginal is bivariate -- the dnorm() call below is the part I suspect is wrong
x = seq(-1,6,by=0.01)
fx = dnorm(x,mean=mu.marginal,sd=sqrt(Sigma.marginal))
ggplot(data=data.frame(x=x,y=fx),mapping=aes(x=x,y=y)) + geom_line(col="blue")
It seems to me you were on the right track with mvtnorm and came close to a solution... I'm not sure how you ran into a non-array input error, but here's what I got using mvtnorm:
set.seed(123)
library(ggplot2)
# simulate from the bivariate marginal of (x_1, x_5)
dat <- mvtnorm::rmvnorm(1e4, mean = mu.marginal, sigma = Sigma.marginal)
dat <- as.data.frame(dat)
ggplot(dat, aes(x = V1, y = V2)) +
  geom_bin2d()
You can see it's fairly spherical, which is what you'd expect, since the off-diagonal elements of Sigma.marginal are 0 (meaning that x_1 and x_5 are marginally independent and normally distributed).
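If you want the exact marginal density rather than simulated draws, you can evaluate mvtnorm::dmvnorm on a grid, which also shows where your non-array error likely came from: dmvnorm expects each evaluation point as a row of a matrix. A sketch (grid ranges chosen by hand around mu.marginal):
library(ggplot2)
grid <- expand.grid(x1 = seq(-2, 4, by = 0.05),
                    x5 = seq(-10, 2, by = 0.05))
grid$f <- mvtnorm::dmvnorm(as.matrix(grid), mean = mu.marginal,
                           sigma = Sigma.marginal)
ggplot(grid, aes(x = x1, y = x5, z = f)) + geom_contour(col = "blue")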
I have the following function, and I need it to be maximized instead of minimized.
adbudgReturn = function(Spend,a,b,c,d){
adbudgReturn = sum(b+(a-b)*((Spend^c)/(d+(Spend^c))))
return(adbudgReturn)
}
FP_param <- c(95000,0,1.15,700000)
FB_param <- c(23111.55,0,1.15,20000)
GA_param <- c(115004,1409,1.457,2000000)
y = c(0.333333,0.333333,0.333333)
TotalSpend <- function(Budget,y){
FP_clicks = adbudgReturn(Budget * y[1], FP_param[1], FP_param[2], FP_param[3], FP_param[4])
FB_clicks = adbudgReturn(Budget * y[2], FB_param[1], FB_param[2], FB_param[3], FB_param[4])
GA_clicks = adbudgReturn(Budget * y[3], GA_param[1], GA_param[2], GA_param[3], GA_param[4])
return(FP_clicks + FB_clicks + GA_clicks)
}
startValVec = c(0.33333,0.333333,0.3333333)
minValVec = c(0,0.2,0)
maxValVec = c(0.8, 1, 0.8)
MaxClicks_optim.parms <- nlminb(objective = TotalSpend,start = startValVec,
lower = minValVec,
upper = maxValVec,
control = list(iter.max=100000,eval.max=20000),
Budget = 10000)
I have tried adding a minus sign in front of the nlminb function, i.e.:
-nlminb(..)
but without any success. Any help will be appreciated.
Also, I would like to add a constraint so that the solution vector sums to 1.
Other optimization functions in R, such as optim(), have a built-in fnscale control parameter you can use to switch from minimization to maximization (i.e. optim(..., control = list(fnscale = -1))), but nlminb doesn't appear to. So you either need to flip the sign in your original objective function, or (possibly more transparently) make a wrapper function that inverts the sign, e.g.
max_obj <- function(...) -1*TotalSpend(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj,
[ .... everything else as before ... ] )
Note that the ... in the max_obj() definition are literal. The only part of the solution above that needs to be filled in is the [ .... everything else as before ... ] part. To be absolutely explicit:
max_obj <- function(...) -1*TotalSpend(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj,
start = startValVec,
lower = minValVec,
upper = maxValVec,
control = list(iter.max=100000,eval.max=20000),
Budget = 1e4)
If you were using a user-specified gradient argument you'd have to wrap that too.
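For instance (a hypothetical sketch: TotalSpend_grad is a name I'm introducing, not a function defined anywhere in this question):
max_grad <- function(...) -1*TotalSpend_grad(...)  # hypothetical analytic gradient, sign flipped
MaxClicks_optim.parms <- nlminb(objective = max_obj,
                                gradient = max_grad,
                                start = startValVec,
                                lower = minValVec,
                                upper = maxValVec,
                                control = list(iter.max=100000, eval.max=20000),
                                Budget = 1e4)
nlminb passes the ... arguments to the gradient as well as the objective, so both must accept the same signature.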
This CV question points out that you can maximize by minimizing the negative of a function, but doesn't go into the nuts and bolts.
An optim()-based solution would look something like:
optim(fn = TotalSpend,
par = startValVec,
lower = minValVec,
upper = maxValVec,
method = "L-BFGS-B",
control = list(maxit=100000, fnscale=-1),
Budget = 1e4)
Note that L-BFGS-B is the only method built into optim() that does box-constrained optimization, and that optim() doesn't have separate controls for the maximum number of iterations and the maximum number of function evaluations.
Here is an example with a simple parabolic function. It works the same with nlminb and optim:
## ==== Some preliminaries ========================
par(mfrow=c(1,2))
a <- b <- seq(-10, 10, 0.1)
## ==== Search for a minimum ======================
# function has minimum
f1 <- function(a, b) {
(a - 1)^2 + (b - 2)^2
}
## show function, blue color is low
image(a, b, outer(a, b, f1), col=topo.colors(16))
## wrapper: combine parameters
g1 <- function(p) f1(p["a"], p["b"])
## minimization
(ret <- nlminb(c(a=0, b=0), g1))
## show minimum
points(t(ret$par), pch="+", cex=2)
## ==== Search for a maximum =======================
## function has a maximum
f2 <- function(a, b) {
- (a - 1)^2 - (b + 2)^2
}
## brown color is high
image(a, b, outer(a, b, f2), col=topo.colors(16))
## wrapper: combine parameters, invert sign
g2 <- function(p) -f2(p["a"], p["b"])
## minimization of negative objective = maximization
(ret <- nlminb(c(a=0, b=0), g2))
## show maximum
points(t(ret$par), pch="+", cex=2)
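The equivalent maximization with optim() needs no wrapper, because fnscale = -1 flips the sign internally (a sketch using the same f2 and starting point as above):
## same maximization as above, via optim's fnscale control
optim(c(a = 0, b = 0), function(p) f2(p["a"], p["b"]),
      control = list(fnscale = -1))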
I first want to sample 100 gamma-distributed numbers where shape = 2 and scale = 1/2. I wrote down the log-likelihood function and negated it, since I'm using a minimization tool to maximize. I also tried using optim, but to no avail: both optim and nlm gave me different answers. This is my code thus far:
N = 100
shape = 2
scale = 1/2
Data <- rgamma(N, shape, scale)  # NB: rgamma's third positional argument is rate, not scale
LogL = function (x){
  k = x[1]       # shape
  gamma = x[2]   # rate
  # negated log-likelihood, written in terms of k and gamma
  (-1)*(N*k*log(gamma) + (k-1)*sum(log(Data)) - gamma*sum(Data))
}
nlm(LogL, c(1.5, 1))
A cleaner approach is to let dgamma() compute the log-density itself; your hand-written version omits the -N*lgamma(k) normalizing term, which leaves the likelihood unbounded and is likely why optim and nlm disagree:
N = 100
shape = 2
scale = 1/2
Data <- rgamma(N, shape, scale)
# full negative log-likelihood, including the normalizing constant
logL <- function (x) -sum(dgamma(Data, x[1], x[2], log = TRUE))
optim(c(1.5, 1), logL)$par
nlm(logL, c(1.5, 1))$estimate
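As a cross-check, MASS::fitdistr maximizes the same likelihood (a sketch; the explicit start list just mirrors the c(1.5, 1) starting values above):
MASS::fitdistr(Data, "gamma", start = list(shape = 1.5, rate = 1))
All three estimates should agree up to numerical tolerance.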
I don't seem to understand how the simulated annealing algorithm used by GenSA always arrives at the same solution when the argument par is provided:
library(GenSA)
Rastrigin <- function(x) {
sum(x^2 - 10 * cos(2 * pi * x)) + 10 * length(x)
}
niter <- 10
sol <- data.frame(with.par = NaN*seq(niter))
sol$without.par <- NaN
for(i in seq(niter)){
fit1 <- GenSA(
par = c(1,1),
fn = Rastrigin,
lower = c(-5.12, -5.12),
upper = c(5.12, 5.12)
)
fit2 <- GenSA(
fn = Rastrigin,
lower = c(-5.12, -5.12),
upper = c(5.12, 5.12)
)
sol$with.par[i] <- min(as.data.frame(fit1$trace.mat)$function.value)
sol$without.par[i] <- min(as.data.frame(fit2$trace.mat)$function.value)
}
sol
resulting in:
with.par without.par
1 2.209873e-09 2.142819e-09
2 2.209873e-09 2.209873e-09
3 2.209873e-09 2.142819e-09
4 2.209873e-09 2.209873e-09
5 2.209873e-09 2.142819e-09
6 2.209873e-09 2.209873e-09
7 2.209873e-09 2.209873e-09
8 2.209873e-09 2.209873e-09
9 2.209873e-09 2.209870e-09
10 2.209873e-09 2.209873e-09
Even when par = NULL, there is quite a high degree of convergence. Either way, does this not go against the idea of a random search in SA?
After digging into the source code of GenSA a little bit, I have realized that you are correct: if you set par, you will always get the same result. R's random seed is only used in the following lines:
else {
if (con$verbose) {
cat("Initializing par with random data inside bounds\n")
}
par <- vector()
#initialize par with random values in the bounds
par <- lower + runif(length(lower))*(upper-lower)
}
which generate the random initial point if it's not specified, before passing it into the C++ engine. So if you set par yourself, GenSA will always return the same result, because the C++ engine is seeded separately from R.
This does not mean it's not doing a random search, though; look at how convergence happens:
It is a random search, but if you explicitly set par, the C++ engine uses the same seed, and hence you get the same result every time.
library(GenSA)
library(ggplot2)
library(data.table)
Rastrigin <- function(x) {
sum(x^2 - 10 * cos(2 * pi * x)) + 10 * length(x)
}
fit1 <- GenSA(
par = c(1,1),
fn = Rastrigin,
lower = c(-5.12, -5.12),
upper = c(5.12, 5.12)
)
fit2 <- GenSA(
fn = Rastrigin,
lower = c(-5.12, -5.12),
upper = c(5.12, 5.12)
)
res = data.table(fit1$trace.mat)[,.SD[1,], .(nb.steps)]
res2 = data.table(fit2$trace.mat)[,.SD[1,], .(nb.steps)]
p = ggplot() +
geom_line(data = res[1:250,], aes( x=nb.steps, y=function.value, colour = "par")) +
geom_line(data = res2[1:250,], aes( x=nb.steps, y=function.value, colour = "no-par"))
print(p)
If you run this code multiple times, the blue line ("par") will be exactly the same every time, while the red line ("no-par") will change.
The package has been updated to version 1.1.7, which adds a "seed" option to the control list argument of the GenSA function to seed the internal random generator.
Best regards,
Sylvain.
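Based on that update, runs without par can be made reproducible too by seeding GenSA's internal generator through the control list (a sketch; the seed component per the maintainer's note above):
library(GenSA)
fit3 <- GenSA(fn = Rastrigin,
              lower = c(-5.12, -5.12),
              upper = c(5.12, 5.12),
              control = list(seed = 42))
fit3$value  # identical across repeated runs with the same seed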
Assume A follows an Exponential distribution and B follows a Gamma distribution.
How can I plot the PDF of 0.5*(A+B)?
This is fairly straightforward using the distr package:
library(distr)
A <- Exp(rate=3)
B <- Gammad(shape=2, scale=3)
conv <- 0.5*(A+B)
plot(conv)
plot(conv, to.draw.arg=1)
Edit by JD Long:
[resulting plot: density curve of 0.5*(A+B) drawn by plot(conv)]
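If you need values rather than a picture, distr also exposes accessor functions for its distribution objects; a sketch (assuming the standard d()/p()/r() slots the package provides):
dconv <- d(conv)   # density function of 0.5*(A+B)
dconv(1)           # density evaluated at x = 1
p(conv)(2)         # CDF evaluated at x = 2
r(conv)(5)         # five random draws from the convolution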
If you're just looking for a fast graph, I usually take the quick-and-dirty simulation approach: do some draws, slam a Gaussian kernel density on the draws, and plot that bad boy:
numDraws <- 1e6
gammaDraws <- rgamma(numDraws, 2)
expDraws <- rexp(numDraws)
combined <- .5 * (gammaDraws + expDraws)
plot(density(combined))
The output should look a little like this:
[kernel density plot of 0.5*(A+B)]
Here is an attempt at doing the convolution (which @Jim Lewis refers to) in R. Note that there are probably much more efficient ways of doing this.
lower <- 0
upper <- 20
t <- seq(lower,upper,0.01)
fA <- dexp(t, rate = 0.4)
fB <- dgamma(t,shape = 8, rate = 2)
## C has the same distribution as (A + B)/2:
## f_C(x) = 2 * integral of f_A(y) * f_B(2x - y) dy
## (the convolution of f_A and f_B evaluated at 2x, times the Jacobian of the scaling)
dC <- function(x, lower, upper, exp.rate, gamma.rate, gamma.shape){
  integrand <- function(Y, X, exp.rate, gamma.rate, gamma.shape){
    dexp(Y, rate = exp.rate)*dgamma(2*X - Y, rate = gamma.rate, shape = gamma.shape)*2
  }
out <- NULL
for(ix in seq_along(x)){
out[ix] <-
integrate(integrand, lower = lower, upper = upper,
X = x[ix], exp.rate = exp.rate,
gamma.rate = gamma.rate, gamma.shape = gamma.shape)$value
}
return(out)
}
fC <- dC(t, lower=lower, upper=upper, exp.rate=0.4, gamma.rate=2, gamma.shape=8)
## plot the resulting distribution
plot(t,fA,
ylim = range(fA,fB,na.rm=TRUE,finite = TRUE),
xlab = 'x',ylab = 'f(x)',type = 'l')
lines(t,fB,lty = 2)
lines(t,fC,lty = 3)
legend('topright', c('A ~ exp(0.4)','B ~ gamma(8,2)', 'C ~ (A+B)/2'),lty = 1:3)
I'm not an R programmer, but it might be helpful to know that for independent random variables with PDFs $f_1(x)$ and $f_2(x)$, the PDF of their sum is given by the convolution of the two input PDFs: $(f_1 * f_2)(x) = \int f_1(t)\,f_2(x - t)\,dt$.
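A minimal numeric version of that formula in R (a sketch; conv_pdf is a name I'm introducing, and it assumes both densities are supported on [0, Inf), as the exponential and gamma here are):
conv_pdf <- function(x, f1, f2) {
  # (f1 * f2)(x): both supports are nonnegative, so the integrand
  # vanishes outside [0, x]
  sapply(x, function(xi) {
    if (xi <= 0) return(0)
    integrate(function(u) f1(u) * f2(xi - u), lower = 0, upper = xi)$value
  })
}
## density of A + B (before the 0.5 scaling used in dC above):
fAB <- conv_pdf(t, function(u) dexp(u, rate = 0.4),
                function(u) dgamma(u, shape = 8, rate = 2))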