I have the following function and I need it to be maximized instead of minimized.
adbudgReturn = function(Spend, a, b, c, d){
  adbudgReturn = sum(b + (a - b) * ((Spend^c) / (d + (Spend^c))))
  return(adbudgReturn)
}
FP_param <- c(95000,0,1.15,700000)
FB_param <- c(23111.55,0,1.15,20000)
GA_param <- c(115004,1409,1.457,2000000)
y = c(0.333333,0.333333,0.333333)
TotalSpend <- function(Budget, y){
  FP_clicks = adbudgReturn(Budget * y[1], FP_param[1], FP_param[2], FP_param[3], FP_param[4])
  FB_clicks = adbudgReturn(Budget * y[2], FB_param[1], FB_param[2], FB_param[3], FB_param[4])
  GA_clicks = adbudgReturn(Budget * y[3], GA_param[1], GA_param[2], GA_param[3], GA_param[4])
  return(FP_clicks + FB_clicks + GA_clicks)
}
startValVec = c(0.33333,0.333333,0.3333333)
minValVec = c(0,0.2,0)
maxValVec = c(0.8, 1, 0.8)
MaxClicks_optim.parms <- nlminb(objective = TotalSpend,start = startValVec,
lower = minValVec,
upper = maxValVec,
control = list(iter.max=100000,eval.max=20000),
Budget = 10000)
I have tried adding a minus sign in front of the nlminb call, i.e.:
-nlminb(..)
but without any success. Any help will be appreciated.
Also, I would like to add a constraint so that the sum of maxValVec equals 1.
Other optimization functions in R, such as optim(), have a built-in fnscale control parameter you can use to switch from minimization to maximization (i.e. optim(..., control=list(fnscale=-1))), but nlminb doesn't appear to. So you either need to flip the sign in your original objective function, or (possibly more transparently) write a wrapper function that inverts the sign, e.g.
max_obj <- function(...) -1*TotalSpend(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj,
[ .... everything else as before ... ] )
Note that the ... in the max_obj() definition are literal. The only part of the solution above that needs to be filled in is the [.... everything else as before ...] part. To be absolutely explicit:
max_obj <- function(...) -1*TotalSpend(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj,
                                start = startValVec,
                                lower = minValVec,
                                upper = maxValVec,
                                control = list(iter.max=100000, eval.max=20000),
                                Budget = 1e4)
If you were using a user-specified gradient argument you'd have to wrap that too.
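For completeness, a minimal sketch of that, assuming a hypothetical analytic gradient function TotalSpend_gr() (not defined anywhere in the question):
## TotalSpend_gr() is hypothetical; it is assumed to return the gradient of TotalSpend().
max_gr <- function(...) -1 * TotalSpend_gr(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj, gradient = max_gr,
                                start = startValVec,
                                lower = minValVec,
                                upper = maxValVec,
                                control = list(iter.max=100000, eval.max=20000),
                                Budget = 1e4)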
This CV question points out that you can maximize by minimizing the negative of a function, but doesn't go into the nuts and bolts.
An optim()-based solution would look something like:
optim(fn = TotalSpend,
      par = startValVec,
      lower = minValVec,
      upper = maxValVec,
      method = "L-BFGS-B",
      control = list(maxit=100000, fnscale=-1),
      Budget = 1e4)
L-BFGS-B is the only method built into optim() that does box-constrained optimization. Also note that optim() doesn't have separate controls for the maximum number of iterations and the maximum number of function evaluations.
Here is an example with a simple parabolic function. It works the same way with nlminb and optim:
## ==== Some preliminaries ========================
par(mfrow=c(1,2))
a <- b <- seq(-10, 10, 0.1)
## ==== Search for a minimum ======================
# function has minimum
f1 <- function(a, b) {
  (a - 1)^2 + (b - 2)^2
}
## show function, blue color is low
image(a, b, outer(a, b, f1), col=topo.colors(16))
## wrapper: combine parameters
g1 <- function(p) f1(p["a"], p["b"])
## minimization
(ret <- nlminb(c(a=0, b=0), g1))
## show minimum
points(t(ret$par), pch="+", cex=2)
## ==== Search for a maximum =======================
## function has a maximum
f2 <- function(a, b) {
  - (a - 1)^2 - (b + 2)^2
}
## brown color is high
image(a, b, outer(a, b, f2), col=topo.colors(16))
## wrapper: combine parameters, invert sign
g2 <- function(p) -f2(p["a"], p["b"])
## minimization of negative objective = maximization
(ret <- nlminb(c(a=0, b=0), g2))
## show maximum
points(t(ret$par), pch="+", cex=2)
Some background: the nlm function in R is a general purpose optimization routine that uses Newton's method. To optimize a function, Newton's method requires the function, as well as the first and second derivatives of the function (the gradient vector and the Hessian matrix, respectively). In R the nlm function allows you to specify R functions that correspond to calculations of the gradient and Hessian, or one can leave these unspecified and numerical solutions are provided based on numerical derivatives (via the deriv function). More accurate solutions can be found by supplying functions to calculate the gradient and Hessian, so it's a useful feature.
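(As a small aside not in the original post, here is a toy sketch of that mechanism: the objective passed to nlm() returns its value with "gradient" and "hessian" attributes attached.)
## Toy example: minimize (x - 1)^2 + (y - 2)^2 with analytic derivatives supplied.
quad <- function(par) {
  val <- sum((par - c(1, 2))^2)
  attr(val, "gradient") <- 2 * (par - c(1, 2))   # analytic gradient
  attr(val, "hessian")  <- diag(2, 2)            # constant analytic Hessian (2 * identity)
  val
}
nlm(quad, p = c(0, 0), check.analyticals = TRUE)$estimate  # converges to c(1, 2)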
My problem: the nlm function is slower and often fails to converge in a reasonable amount of time when the analytic Hessian is supplied. I'm guessing this is some sort of bug in the underlying code, but I'd be happy to be wrong. Is there a way to make nlm work better with an analytic Hessian matrix?
Example: my R code below demonstrates this problem using a logistic regression example, where
log(Pr(Y=1)/Pr(Y=0)) = b0 + Xb
where X is a multivariate normal of dimension N by p and b is a vector of coefficients of length p.
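For reference, with D = cbind(1, X) the design matrix and mu = D %*% beta, the functions below implement the standard logistic-regression expressions (each negated when min = TRUE, so that nlm minimizes the negative log-likelihood):
log-likelihood: ll(beta) = sum( y*log(expit(mu)) + (1-y)*log(1-expit(mu)) )
gradient: t(D) %*% (y - expit(mu))
Hessian: -t(D) %*% W %*% D, where W = diag( expit(mu) * (1 - expit(mu)) )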
library(mvtnorm)
# example demonstrating a problem with NLM
expit <- function(mu) {1/(1+exp(-mu))}
mk.logit.data <- function(N, p){
  set.seed(1232)
  U = matrix(runif(p*p), nrow=p, ncol=p)
  S = 0.5*(U+t(U)) + p*diag(rep(1,p))
  X = rmvnorm(N, mean = runif(p, -1, 1), sigma = S)
  Design = cbind(rep(1, N), X)
  beta = sort(sample(c(rep(0,p), runif(1))))
  y = rbinom(N, 1, expit(Design %*% beta))
  list(X=X, y=as.numeric(y), N=N, p=p)
}
# function to calculate gradient vector at given coefficient values
logistic_gr <- function(beta, y, x, min=TRUE){
  mu = beta[1] + x %*% beta[-1]
  p = length(beta)
  n = length(y)
  D = cbind(rep(1,n), x)
  gri = matrix(nrow=n, ncol=p)
  for(j in 1:p){
    gri[,j] = D[,j]*(exp(-mu)*y-1+y)/(1+exp(-mu))
  }
  gr = apply(gri, 2, sum)
  if(min) gr = -gr
  gr
}
# function to calculate Hessian matrix at given coefficient values
logistic_hess <- function(beta, y, x, min=TRUE){
  # allow to fail with NA, NaN, Inf values
  mu = beta[1] + x %*% beta[-1]
  p = length(beta)
  n = length(y)
  D = cbind(rep(1,n), x)
  h = matrix(nrow=p, ncol=p)
  for(j in 1:p){
    for(k in 1:p){
      h[j,k] = -sum(D[,j]*D[,k]*(exp(-mu))/(1+exp(-mu))^2)
    }
  }
  if(min) h = -h
  h
}
# function to calculate likelihood (up to a constant) at given coefficient values
logistic_ll <- function(beta, y, x, gr=FALSE, he=FALSE, min=TRUE){
  mu = beta[1] + x %*% beta[-1]
  lli = log(expit(mu))*y + log(1-expit(mu))*(1-y)
  ll = sum(lli)
  if(is.na(ll) | is.infinite(ll)) ll = -1e16
  if(min) ll = -ll
  # the below specification is required for using analytic gradient/Hessian in nlm function
  if(gr) attr(ll, "gradient") <- logistic_gr(beta, y=y, x=x, min=min)
  if(he) attr(ll, "hessian") <- logistic_hess(beta, y=y, x=x, min=min)
  ll
}
First example, with p=3:
dat = mk.logit.data(N=100, p=3)
The glm function estimates are for reference. nlm should give the same answer, allowing for small errors due to approximation.
(glm.sol <- glm(dat$y~dat$X, family=binomial()))$coefficients
> (Intercept) dat$X1 dat$X2 dat$X3
> 0.00981465 0.01068939 0.04417671 0.01625381
# works when correct analytic gradient is specified
(nlm.sol1 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, y=dat$y, x=dat$X))$estimate
> [1] 0.009814547 0.010689396 0.044176627 0.016253966
# works, but less accurate when correct analytic hessian is specified (even though the routine notes convergence is probable)
(nlm.sol2 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, he=TRUE, y=dat$y, x=dat$X, hessian = TRUE, check.analyticals=TRUE))$estimate
> [1] 0.009827701 0.010687278 0.044178416 0.016255630
But the problem becomes apparent when p is larger; here it is 10:
dat = mk.logit.data(N=100, p=10)
Again, glm solution for reference. nlm should give the same answer, allowing for small errors due to approximation.
(glm.sol <- glm(dat$y~dat$X, family=binomial()))$coefficients
> (Intercept) dat$X1 dat$X2 dat$X3 dat$X4 dat$X5 dat$X6 dat$X7
> -0.07071882 -0.08670003 0.16436630 0.01130549 0.17302058 0.03821008 0.08836471 -0.16578959
> dat$X8 dat$X9 dat$X10
> -0.07515477 -0.08555075 0.29119963
# works when correct analytic gradient is specified
(nlm.sol1 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, y=dat$y, x=dat$X))$estimate
> [1] -0.07071879 -0.08670005 0.16436632 0.01130550 0.17302057 0.03821009 0.08836472
> [8] -0.16578958 -0.07515478 -0.08555076 0.29119967
# fails to converge in 5000 iterations when correct analytic hessian is specified
(nlm.sol2 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, he=TRUE, y=dat$y, x=dat$X, hessian = TRUE, iterlim=5000, check.analyticals=TRUE))$estimate
> [1] 0.31602065 -0.06185190 0.10775381 -0.16748897 0.05032156 0.34176104 0.02118631
> [8] -0.01833671 -0.20364929 0.63713991 0.18390489
Edit: I should also add that I have confirmed I have the correct Hessian matrix through multiple different approaches.
I tried the code, but at first it seemed to be using a different rmvnorm than I can find on CRAN. I found one rmvnorm in the dae package, then one in the mvtnorm package. The latter is the one to use.
nlm() was patched around the time of the above posting. I'm currently trying to verify the patches, and it now seems to work OK. Note that I am the author of a number of R's optimization codes, including 3 of the 5 methods in optim().
nashjc at uottawa.ca
Code is below.
Revised code:
# example demonstrating a problem with NLM
expit <- function(mu) {1/(1+exp(-mu))}
mk.logit.data <- function(N, p){
  set.seed(1232)
  U = matrix(runif(p*p), nrow=p, ncol=p)
  S = 0.5*(U+t(U)) + p*diag(rep(1,p))
  X = rmvnorm(N, mean = runif(p, -1, 1), sigma = S)
  Design = cbind(rep(1, N), X)
  beta = sort(sample(c(rep(0,p), runif(1))))
  y = rbinom(N, 1, expit(Design %*% beta))
  list(X=X, y=as.numeric(y), N=N, p=p)
}
# function to calculate gradient vector at given coefficient values
logistic_gr <- function(beta, y, x, min=TRUE){
  mu = beta[1] + x %*% beta[-1]
  p = length(beta)
  n = length(y)
  D = cbind(rep(1,n), x)
  gri = matrix(nrow=n, ncol=p)
  for(j in 1:p){
    gri[,j] = D[,j]*(exp(-mu)*y-1+y)/(1+exp(-mu))
  }
  gr = apply(gri, 2, sum)
  if(min) gr = -gr
  gr
}
# function to calculate Hessian matrix at given coefficient values
logistic_hess <- function(beta, y, x, min=TRUE){
  # allow to fail with NA, NaN, Inf values
  mu = beta[1] + x %*% beta[-1]
  p = length(beta)
  n = length(y)
  D = cbind(rep(1,n), x)
  h = matrix(nrow=p, ncol=p)
  for(j in 1:p){
    for(k in 1:p){
      h[j,k] = -sum(D[,j]*D[,k]*(exp(-mu))/(1+exp(-mu))^2)
    }
  }
  if(min) h = -h
  h
}
# function to calculate likelihood (up to a constant) at given coefficient values
logistic_ll <- function(beta, y, x, gr=FALSE, he=FALSE, min=TRUE){
  mu = beta[1] + x %*% beta[-1]
  lli = log(expit(mu))*y + log(1-expit(mu))*(1-y)
  ll = sum(lli)
  if(is.na(ll) | is.infinite(ll)) ll = -1e16
  if(min) ll = -ll
  # the below specification is required for using analytic gradient/Hessian in nlm function
  if(gr) attr(ll, "gradient") <- logistic_gr(beta, y=y, x=x, min=min)
  if(he) attr(ll, "hessian") <- logistic_hess(beta, y=y, x=x, min=min)
  ll
}
##!!!! NOTE: Must have this library loaded
library(mvtnorm)
dat = mk.logit.data(N=100, p=3)
(glm.sol <- glm(dat$y~dat$X, family=binomial()))$coefficients
# works when correct analytic gradient is specified
(nlm.sol1 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, y=dat$y, x=dat$X))$estimate
# works, but less accurate when correct analytic hessian is specified (even though the routine notes convergence is probable)
(nlm.sol2 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, he=TRUE, y=dat$y, x=dat$X, hessian = TRUE, check.analyticals=TRUE))$estimate
dat = mk.logit.data(N=100, p=10)
# Again, glm solution for reference. nlm should give the same answer, allowing for small errors due to approximation.
(glm.sol <- glm(dat$y~dat$X, family=binomial()))$coefficients
# works when correct analytic gradient is specified
(nlm.sol1 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, y=dat$y, x=dat$X))$estimate
# fails to converge in 5000 iterations when correct analytic hessian is specified
(nlm.sol2 <- nlm(p=runif(dat$p+1), f=logistic_ll, gr=TRUE, he=TRUE, y=dat$y, x=dat$X, hessian = TRUE, iterlim=5000, check.analyticals=TRUE))$estimate
I estimated the CDF of my density on an interval of length 0.03 using 10k points. Even though my CDF is pretty smooth, the inverse of the CDF isn't smooth at all. Here the interval of length 1 is also evaluated with 10k points.
See: Estimated CDF & Estimated Inverse CDF
For the Inverse CDF I use:
x = seq(from = 0, to = 1, length = 10000)
F_hat_inv_given_x = function(y){
  uniroot(function(x){
    F_hat_given_x(x) - y
  }, interval = c(0.065, 0.095))$root
}
F_hat_inv_given_x = Vectorize(F_hat_inv_given_x)
with F_hat_given_x defined as:
F_hat_given_x = function(y) {
  integrate(f = f_hat_given_x, min(y_data), y)$value
}
F_hat_given_x <- Vectorize(F_hat_given_x)
where f_hat_given_x is my density:
f_hat_given_x = function(y){
  tapply(y, x = x_sample, FUN = f_hat, INDEX = 1:length(y))
}
which is a conditional density f(y | X = x) for a given sample X = x_sample:
f_hat = function(x, y){
  (sum(K(abs(x-x_data)/H_n) * (K(abs(y-y_data)/h_n)))) / (h_n * sum(K(abs(x-x_data)/H_n)))
}
where K is the Gaussian kernel, h_n and H_n are bandwidths computed via npcdensbw, and x_data and y_data are the given data on which I evaluate the conditional density f_hat(x, y).
Any idea why the inverse function isn't monotone increasing, as an inverse CDF should be? What is causing the inaccuracy?
Given the following function:
f(x) = (1/2*pi)(1/(1+x^2/4))
How do I identify its distribution and write this distribution function in R?
So this is your function right now (hopefully you know how to write an R function; if not, check writing your own function):
f <- function (x) (pi / 2) * (1 / (1 + 0.25 * x ^ 2))
f is defined on (-Inf, Inf), so integration over this range is an improper integral. Fortunately, f decays to 0 at the rate of x ^ (-2) as x approaches ±Inf, so the integral is well defined and can be computed:
C <- integrate(f, -Inf, Inf)
# 9.869604 with absolute error < 1e-09
C <- C$value ## extract integral value
# [1] 9.869604
Then you want to normalize f, as we know that a probability density should integrate to 1:
f <- function (x) (pi / 2) * (1 / (1 + 0.25 * x ^ 2)) / C
You can draw its density by:
curve(f, from = -10, to = 10)
Now that I have the probability density function, how do I create a random sample of, say, n = 1000 using this new distribution?
A somewhat off-topic question, but OK to answer without your making a new thread. It is useful, as it turns out to be subtle.
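The sampler simf() used below is not defined in this excerpt. Judging from the calls, it is an inverse-transform sampler whose second argument e trims the extreme tails; a minimal reconstruction under that assumption (using the analytic CDF F(x) = atan(x/2)/pi + 1/2 of the normalized f, so it may not reproduce the exact numbers quoted below):
## Hypothetical reconstruction of simf(): inverse-transform sampling with tails trimmed by e.
simf <- function(n, e = 1e-2) {
  u <- runif(n, min = e, max = 1 - e)   # uniform draws kept away from 0 and 1
  2 * tan(pi * (u - 0.5))               # inverse of F(x) = atan(x/2)/pi + 1/2
}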
Compare
set.seed(0); range(simf(1000, 1e-2))
#[1] -56.37246 63.21080
set.seed(0); range(simf(1000, 1e-3))
#[1] -275.3465 595.3771
set.seed(0); range(simf(1000, 1e-4))
#[1] -450.0979 3758.2528
set.seed(0); range(simf(1000, 1e-5))
#[1] -480.5991 8017.3802
So I think e = 1e-2 is reasonable. We could draw samples, make a (scaled) histogram, and overlay the density curve:
set.seed(0); x <- simf(1000)
hist(x, prob = TRUE, breaks = 50, ylim = c(0, 0.16))
curve(f, add = TRUE, col = 2, lwd = 2, n = 201)
Assume A follows an Exponential distribution and B follows a Gamma distribution.
How do I plot the PDF of 0.5*(A+B)?
This is fairly straightforward using the "distr" package:
library(distr)
A <- Exp(rate=3)
B <- Gammad(shape=2, scale=3)
conv <- 0.5*(A+B)
plot(conv)
plot(conv, to.draw.arg=1)
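If you also want to evaluate or sample from the convolved distribution rather than just plot it, distr exposes accessor functions for the density, CDF, quantile function and random generator; as far as I recall the API, something like:
dconv <- d(conv)        # density function of 0.5*(A+B)
dconv(1)                # evaluate the density at x = 1
set.seed(1)
draws <- r(conv)(1000)  # 1000 random draws from the convolution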
Edit by JD Long
Resulting plot looks like this:
If you're just looking for a fast graph, I usually do the quick-and-dirty simulation approach: do some draws, slam a Gaussian kernel density on the draws, and plot that bad boy:
numDraws <- 1e6
gammaDraws <- rgamma(numDraws, 2)
expDraws <- rexp(numDraws)
combined <- .5 * (gammaDraws + expDraws)
plot(density(combined))
output should look a little like this:
Here is an attempt at doing the convolution (which @Jim Lewis refers to) in R. Note that there are probably much more efficient ways of doing this.
lower <- 0
upper <- 20
t <- seq(lower,upper,0.01)
fA <- dexp(t, rate = 0.4)
fB <- dgamma(t,shape = 8, rate = 2)
## C has the same distribution as (A + B)/2
dC <- function(x, lower, upper, exp.rate, gamma.rate, gamma.shape){
  integrand <- function(Y, X, exp.rate, gamma.rate, gamma.shape){
    dexp(Y, rate = exp.rate) * dgamma(2*X - Y, rate = gamma.rate, shape = gamma.shape) * 2
  }
  out <- NULL
  for(ix in seq_along(x)){
    out[ix] <-
      integrate(integrand, lower = lower, upper = upper,
                X = x[ix], exp.rate = exp.rate,
                gamma.rate = gamma.rate, gamma.shape = gamma.shape)$value
  }
  return(out)
}
fC <- dC(t, lower=lower, upper=upper, exp.rate=0.4, gamma.rate=2, gamma.shape=8)
## plot the resulting distribution
plot(t, fA,
     ylim = range(fA, fB, na.rm = TRUE, finite = TRUE),
     xlab = 'x', ylab = 'f(x)', type = 'l')
lines(t, fB, lty = 2)
lines(t, fC, lty = 3)
legend('topright', c('A ~ exp(0.4)', 'B ~ gamma(8,2)', 'C ~ (A+B)/2'), lty = 1:3)
I'm not an R programmer, but it might be helpful to know that for independent random variables with PDFs f1(x) and f2(x), the PDF of the sum of the two variables is given by the convolution of the two input PDFs: (f1 * f2)(z) = integral of f1(x) * f2(z - x) dx.
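As a rough alternative to the integrate() loop above, the same convolution can be done on a discrete grid with R's convolve(); a sketch using the same Exp(0.4) and Gamma(shape = 8, rate = 2) densities (the grid spacing and upper limit are my own choices):
## Discrete-grid convolution; ?convolve notes that the usual convolution of two
## sequences x and y is convolve(x, rev(y), type = "open").
dt <- 0.01
t  <- seq(0, 40, by = dt)
fA <- dexp(t, rate = 0.4)
fB <- dgamma(t, shape = 8, rate = 2)
fS <- convolve(fA, rev(fB), type = "open") * dt   # density of S = A + B
s  <- seq(0, 2 * max(t), by = dt)                 # grid for S
## C = S/2, so f_C(x) = 2 * f_S(2x); compare with fC computed above.
plot(s / 2, 2 * fS, type = "l", xlim = c(0, 20), xlab = "x", ylab = "f(x)")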