Constrained optimization in R: setting up constraints

I have been trying to solve a constrained optimization problem in R using constrOptim() (my first time) but am struggling to set up the constraints for my problem.
The problem is pretty straightforward and I can set up the function OK, but I am a bit at a loss about passing the constraints in.
The problem I've defined is below (I am going to start with N fixed at, say, 1000, so I just want to solve for X; ultimately I'd like to choose both N and X to maximize profit).
I can set up the function as:
fun <- function(x, N, a, c, s) {  ## a profit function
  x1 <- x[1]
  x2 <- x[2]
  x3 <- x[3]
  a1 <- a[1]
  a2 <- a[2]
  a3 <- a[3]
  c1 <- c[1]
  c2 <- c[2]
  c3 <- c[3]
  s1 <- s[1]
  s2 <- s[2]
  s3 <- s[3]
  ((N*x1*a1*s1) - (N*x1*c1)) + ((N*x2*a2*s2) - (N*x2*c2)) + ((N*x3*a3*s3) - (N*x3*c3))
}
The constraints I need to implement are:
x1 >= 0.03
x1 <= 0.7
x2 >= 0.03
x2 <= 0.7
x3 >= 0.03
x3 <= 0.7
x1 + x2 + x3 = 1
The X here represents buckets into which I need to optimally allocate N, so x1 = percent of N to place in bucket 1, etc., with each bucket having at least 3% but no more than 70%.
Any help much appreciated...
Here is an example I used to test that the function (defined above) does what I want:
x <- matrix(c(0.5, 0.25, 0.25))
a <- matrix(c(0.2, 0.15, 0.1))
s <- matrix(c(100, 75, 50))
c <- matrix(c(10, 8, 7))
N <- 1000
fun(x, N, a, c, s)
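Since the question asks specifically about constrOptim(): that function only accepts linear inequality constraints of the form ui %*% theta - ci >= 0, so one way to handle the equality x1 + x2 + x3 = 1 is to substitute x3 = 1 - x1 - x2 and optimize over (x1, x2) only. A minimal sketch along those lines (not from the answers below; it reuses the example values above and renames c to cost so that base::c() is not masked):
## unit values, as in the example above
a <- c(0.2, 0.15, 0.1)
s <- c(100, 75, 50)
cost <- c(10, 8, 7)
N <- 1000
## profit as a function of (x1, x2); x3 is implied by the equality constraint
prof2 <- function(p) {
  x <- c(p, 1 - sum(p))
  sum(N * x * (a * s - cost))
}
## rows of ui %*% p >= ci encode: x1 >= 0.03, x1 <= 0.7, x2 >= 0.03, x2 <= 0.7,
## x3 >= 0.03 (i.e. x1 + x2 <= 0.97) and x3 <= 0.7 (i.e. x1 + x2 >= 0.3)
ui <- rbind(c( 1,  0),
            c(-1,  0),
            c( 0,  1),
            c( 0, -1),
            c(-1, -1),
            c( 1,  1))
ci <- c(0.03, -0.7, 0.03, -0.7, -0.97, 0.3)
res <- constrOptim(c(0.4, 0.3), prof2, grad = NULL, ui = ui, ci = ci,
                   control = list(fnscale = -1))  # fnscale = -1 turns minimization into maximization
res$par            # x1 and x2; should end up near 0.7 and 0.27
1 - sum(res$par)   # implied x3, near 0.03
Since the objective here is linear, an LP solver (as in the answers below) is the more natural tool; this sketch just shows the constrOptim() mechanics.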

You can use the lpSolveAPI package.
## problem constants
a <- c(0.2, 0.15, 0.1)
s <- c(100, 75, 50)
c <- c(10, 8, 7)
N <- 1000
## Problem formulation
# x1 >= 0.03, x1 <= 0.7
# x2 >= 0.03, x2 <= 0.7
# x3 >= 0.03, x3 <= 0.7
# x1 + x2 + x3 = 1
# maximize N*((a1*s1 - c1)*x1 + (a2*s2 - c2)*x2 + (a3*s3 - c3)*x3)
library(lpSolveAPI)
my.lp <- make.lp(7, 3)
The best way to build a model in lpSolve is column-wise:
#constraints by columns
set.column(my.lp, 1, c(1, 1, 0, 0, 0, 0, 1))
set.column(my.lp, 2, c(0, 0, 1, 1, 0, 0, 1))
set.column(my.lp, 3, c(0, 0, 0, 0, 1, 1, 1))
# the objective function; since we need to maximize, negate it: max(f) = -min(-f)
set.objfn(my.lp, -N * c(a[1]*s[1] - c[1], a[2]*s[2] - c[2], a[3]*s[3] - c[3]))
set.rhs(my.lp, c(rep(c(0.03, 0.7), 3), 1))
#constraint types
set.constr.type(my.lp, c(rep(c(">=", "<="), 3), "="))
Take a look at the model:
my.lp
Model name:
C1 C2 C3
Minimize -10000 -3250 2000
R1 1 0 0 >= 0.03
R2 1 0 0 <= 0.7
R3 0 1 0 >= 0.03
R4 0 1 0 <= 0.7
R5 0 0 1 >= 0.03
R6 0 0 1 <= 0.7
R7 1 1 1 = 1
Kind Std Std Std
Type Real Real Real
Upper Inf Inf Inf
Lower 0 0 0
solve(my.lp)
[1] 0 ## success :)
get.objective(my.lp)
[1] -7817.5
get.constraints(my.lp)
[1] 0.70 0.70 0.27 0.27 0.03 0.03 1.00
## the decisions variables
get.variables(my.lp)
[1] 0.70 0.27 0.03
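As a quick sanity check, plugging the recovered allocation back into the profit function from the question should reproduce the objective value with the sign flipped (we minimized the negated profit):
x.opt <- get.variables(my.lp)
sum(N * x.opt * (a * s - c))
## [1] 7817.5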

In case it is of use to anyone, I also found an answer, as below.
First of all, your objective function can be written a lot more concisely using vector operations:
> my_obj_coeffs <- function(N,a,c,s) N*(a*s-c)
> fun <- function(x,N,a,c,s) sum(my_obj_coeffs(N,a,c,s) * x)
You're trying to solve a linear program, so you can solve it using the simplex algorithm. There's a lightweight implementation of it in the 'boot' package.
> library(boot)
> solution <- function(obj) simplex(obj, diag(3), rep(0.7, 3), diag(3), rep(0.03, 3), matrix(1, 1, 3), 1, maxi = TRUE)
Then for the example parameters you used, you can call that solution function:
> a <- c(0.2,0.15,0.1)
> s <- c(100,75,50)
> c <- c(10,8,7)
> N <- 1000
> solution(my_obj_coeffs(N,a,c,s))
Linear Programming Results
Call : simplex(a = obj(N, a, s, c), A1 = diag(3), b1 = rep(0.7, 3),
A2 = diag(3), b2 = rep(0.03, 3), A3 = matrix(1, 1, 3), b3 = 1,
maxi = TRUE)
Maximization Problem with Objective Function Coefficients
[,1]
[1,] 10000
[2,] 3250
[3,] -2000
attr(,"names")
[1] "x1" "x2" "x3"
Optimal solution has the following values
x1 x2 x3
0.70 0.27 0.03
The optimal value of the objective function is 7817.5.
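The returned object also exposes the solution and the optimum as components, so they can be used programmatically instead of being read off the printout; for example, storing the fit first:
fit <- solution(my_obj_coeffs(N, a, c, s))
fit$soln    # named vector with x1 = 0.70, x2 = 0.27, x3 = 0.03
fit$value   # optimal profit, 7817.5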

Related

How to perform nonlinear least squares with shared parameters in R?

I would like to perform nonlinear least squares regression in R where I simultaneously minimize the squared residuals of three models (see below). Now, the three models share some of the parameters, in my example, parameters b and d.
Is there a way of doing this with nls(), or with either of the packages minpack.lm or nlsr?
So, ideally, I would like to generate the objective function (the sum of least squares of all models together) and regress all parameters at once: a1, a2, a3, b, c1, c2, c3 and d.
(I am trying to avoid running three independent regressions and then perform some averaging on b and d.)
my_model <- function(x, a, b, c, d) {
  a * b ^ (x - c) + d
}

# x values
x <- seq(0, 10, 0.2)

# Shared parameters
b <- 2
d <- 10

a1 <- 1
c1 <- 1
y1 <- my_model(x, a = a1, b = b, c = c1, d = d) + rnorm(length(x))

a2 <- 2
c2 <- 5
y2 <- my_model(x, a = a2, b = b, c = c2, d = d) + rnorm(length(x))

a3 <- -2
c3 <- 3
y3 <- my_model(x, a = a3, b = b, c = c3, d = d) + rnorm(length(x))

plot(y1 ~ x,
     xlim = range(x),
     ylim = d + c(-50, 50),
     type = 'b',
     col = 'red',
     ylab = 'y')
lines(y2 ~ x, type = 'b', col = 'green')
lines(y3 ~ x, type = 'b', col = 'blue')
Below we run nls (using a slightly modified model) and nlxb (from nlsr), but nlxb stops before convergence. Despite these problems, both nevertheless give results that visually fit the data well. The problems suggest that something is wrong with the model itself, so in the Other section, guided by the nlxb output, we show how to fix it, giving a submodel of the original that fits the data easily with both nls and nlxb and also gives a good fit. At the end, in the Note section, we provide the data in reproducible form.
nls
Assuming the setup shown reproducibly in the Note at the end, reformulate the problem for the nls plinear algorithm by defining a right-hand-side matrix whose columns multiply each of the linear parameters, a1, a2, a3 and d, respectively. plinear does not require starting values for those, which simplifies the setup, and it will report them as .lin1, .lin2, .lin3 and .lin4, respectively.
To get starting values we used a simpler model with no grouping and a grid search over b from 1 to 10 and c also from 1 to 10, using nls2 from the package of the same name. We also found that nls still produced errors, but by using abs in the formula, as shown, it ran to completion.
The difficulties with the model suggest that there is a fundamental problem with it, and in the Other section we discuss how to fix it up.
xx <- c(x, x, x)
yy <- c(y1, y2, y3)
# starting values using nls2
library(nls2)
fo0 <- yy ~ cbind(b ^ abs(xx - c), 1)
st0 <- data.frame(b = c(1, 10), c = c(1, 10))
fm0 <- nls2(fo0, start = st0, alg = "plinear-brute")
# run nls using starting values from above
g <- rep(1:3, each = length(x))
fo <- yy ~ cbind((g == 1) * b ^ abs(xx - c[g]),
                 (g == 2) * b ^ abs(xx - c[g]),
                 (g == 3) * b ^ abs(xx - c[g]),
                 1)
st <- with(as.list(coef(fm0)), list(b = b, c = c(c, c, c)))
fm <- nls(fo, start = st, alg = "plinear")
plot(yy ~ xx, col = g)
for(i in unique(g)) lines(predict(fm) ~ xx, col = i, subset = g == i)
fm
giving:
Nonlinear regression model
model: yy ~ cbind((g == 1) * b^abs(xx - c[g]), (g == 2) * b^abs(xx - c[g]), (g == 3) * b^abs(xx - c[g]), 1)
data: parent.frame()
b c1 c2 c3 .lin1 .lin2 .lin3 .lin4
1.997 0.424 1.622 1.074 0.680 0.196 -0.532 9.922
residual sum-of-squares: 133
Number of iterations to convergence: 5
Achieved convergence tolerance: 5.47e-06
nlsr
With nlsr it would be done like this. No grid search for starting values was needed and adding abs was not needed either. The b and d values seem similar to the nls solution but the other coefficients differ. Visually both solutions seem to fit the data.
On the other hand, from the JSingval column we see that the Jacobian is rank deficient, which caused it to stop without producing SE values, and the convergence is in doubt (although it may be sufficient given that, visually, the plot (not shown) seems like a good fit). We discuss how to fix this in the Other section.
g1 <- g == 1; g2 <- g == 2; g3 <- g == 3
fo2 <- yy ~ g1 * (a1 * b ^ (xx - c1) + d) +
            g2 * (a2 * b ^ (xx - c2) + d) +
            g3 * (a3 * b ^ (xx - c3) + d)
st2 <- list(a1 = 1, a2 = 1, a3 = 1, b = 1, c1 = 1, c2 = 1, c3 = 1, d = 1)
fm2 <- nlxb(fo2, start = st2)
fm2
giving:
vn: [1] "yy" "g1" "a1" "b" "xx" "c1" "d" "g2" "a2" "c2" "g3" "a3" "c3"
no weights
nlsr object: x
residual sumsquares = 133.45 on 153 observations
after 16 Jacobian and 22 function evaluations
name coeff SE tstat pval gradient JSingval
a1 3.19575 NA NA NA 9.68e-10 4097
a2 0.64157 NA NA NA 8.914e-11 662.5
a3 -1.03096 NA NA NA -1.002e-09 234.9
b 1.99713 NA NA NA -2.28e-08 72.57
c1 2.66146 NA NA NA -2.14e-09 10.25
c2 3.33564 NA NA NA -3.955e-11 1.585e-13
c3 2.0297 NA NA NA -7.144e-10 1.292e-13
d 9.92363 NA NA NA -2.603e-12 3.271e-14
We can calculate SEs using nls2 as a second stage, but this still does not address the underlying problem that the singular values suggest.
summary(nls2(fo2, start = coef(fm2), algorithm = "brute-force"))
giving:
Formula: yy ~ g1 * (a1 * b^(xx - c1) + d) + g2 * (a2 * b^(xx - c2) + d) +
g3 * (a3 * b^(xx - c3) + d)
Parameters:
Estimate Std. Error t value Pr(>|t|)
a1 3.20e+00 5.38e+05 0.0 1
a2 6.42e-01 3.55e+05 0.0 1
a3 -1.03e+00 3.16e+05 0.0 1
b 2.00e+00 2.49e-03 803.4 <2e-16 ***
c1 2.66e+00 9.42e-02 28.2 <2e-16 ***
c2 3.34e+00 2.43e+05 0.0 1
c3 2.03e+00 8.00e+05 0.0 1
d 9.92e+00 4.42e+05 0.0 1
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.959 on 145 degrees of freedom
Number of iterations to convergence: 8
Achieved convergence tolerance: NA
Other
When nls has trouble fitting a model, it often suggests that something is wrong with the model itself. Playing around with it a bit, guided by the JSingval column in the nlsr output above (which suggests that the c parameters, or d, might be the problem), we find that if we fix all c parameter values to 0 then the model is easy to fit given sufficiently good starting values, and it still gives a low residual sum of squares.
library(nls2)
fo3 <- yy ~ cbind((g==1) * b ^ xx, (g==2) * b ^ xx, (g==3) * b ^ xx, 1)
st3 <- coef(fm0)["b"]
fm3 <- nls(fo3, start = st3, alg = "plinear")
giving:
Nonlinear regression model
model: yy ~ cbind((g == 1) * b^xx, (g == 2) * b^xx, (g == 3) * b^xx, 1)
data: parent.frame()
b .lin1 .lin2 .lin3 .lin4
1.9971 0.5071 0.0639 -0.2532 9.9236
residual sum-of-squares: 133
Number of iterations to convergence: 4
Achieved convergence tolerance: 1.67e-09
which the following anova indicates is comparable to fm from above despite having 3 fewer parameters:
anova(fm3, fm)
giving:
Analysis of Variance Table
Model 1: yy ~ cbind((g == 1) * b^xx, (g == 2) * b^xx, (g == 3) * b^xx, 1)
Model 2: yy ~ cbind((g == 1) * b^abs(xx - c[g]), (g == 2) * b^abs(xx - c[g]), (g == 3) * b^abs(xx - c[g]), 1)
Res.Df Res.Sum Sq Df Sum Sq F value Pr(>F)
1 148 134
2 145 133 3 0.385 0.14 0.94
We can redo fm3 using nlxb like this:
fo4 <- yy ~ g1 * (a1 * b ^ xx + d) +
            g2 * (a2 * b ^ xx + d) +
            g3 * (a3 * b ^ xx + d)
st4 <- list(a1 = 1, a2 = 1, a3 = 1, b = 1, d = 1)
fm4 <- nlxb(fo4, start = st4)
fm4
giving:
nlsr object: x
residual sumsquares = 133.45 on 153 observations
after 24 Jacobian and 33 function evaluations
name coeff SE tstat pval gradient JSingval
a1 0.507053 0.005515 91.94 1.83e-132 8.274e-08 5880
a2 0.0638554 0.0008735 73.11 4.774e-118 1.26e-08 2053
a3 -0.253225 0.002737 -92.54 7.154e-133 -4.181e-08 2053
b 1.99713 0.002294 870.6 2.073e-276 -2.55e-07 147.5
d 9.92363 0.09256 107.2 3.367e-142 -1.219e-11 10.26
Note
The assumed input below is the same as in the question except we additionally
set the seed to make it reproducible.
set.seed(123)
my_model <- function(x, a, b, c, d) a * b ^ (x - c) + d
x <- seq(0, 10, 0.2)
b <- 2; d <- 10 # shared
a1 <- 1; c1 <- 1
y1 <- my_model(x, a = a1, b = b, c = c1, d = d) + rnorm(length(x))
a2 <- 2; c2 <- 5
y2 <- my_model(x, a = a2, b = b, c = c2, d = d) + rnorm(length(x))
a3 <- -2; c3 <- 3
y3 <- my_model(x, a = a3, b = b, c = c3, d = d) + rnorm(length(x))
I'm not sure this is really the best way, but you could minimize the sum of the squared residuals using optim().
#start values
params <- c(a1=1, a2=1, a3=1, b=1, c1=1, c2=1, c3=1,d=1)
# minimize total sum of squares of residuals
fun <- function(p) {
  sum(
    (y1 - my_model(x, p["a1"], p["b"], p["c1"], p["d"]))^2 +
    (y2 - my_model(x, p["a2"], p["b"], p["c2"], p["d"]))^2 +
    (y3 - my_model(x, p["a3"], p["b"], p["c3"], p["d"]))^2
  )
}
out <- optim(params, fun, method="BFGS")
out$par
# a1 a2 a3 b c1 c2 c3
# 0.8807542 1.0241804 -2.8805848 1.9974615 0.7998103 4.0030597 3.5184600
# d
# 9.8764917
And we can add the fitted curves on top of the plot of the data:
curve(my_model(x, out$par["a1"], out$par["b"], out$par["c1"], out$par["d"]), col="red", add=T)
curve(my_model(x, out$par["a2"], out$par["b"], out$par["c2"], out$par["d"]), col="green", add=T)
curve(my_model(x, out$par["a3"], out$par["b"], out$par["c3"], out$par["d"]), col="blue", add=T)
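If approximate standard errors are wanted from this optim() fit as well, one rough sketch (assuming i.i.d. Gaussian errors and the usual least-squares approximation cov(p) ~ 2 * sigma^2 * solve(H), where H is the Hessian of the residual sum of squares) is:
out <- optim(params, fun, method = "BFGS", hessian = TRUE)
n.obs  <- 3 * length(x)                          # three curves observed at the same x values
sigma2 <- out$value / (n.obs - length(params))   # residual variance estimate
se <- sqrt(diag(2 * sigma2 * solve(out$hessian)))
round(se, 4)
# given the rank deficiency discussed above, expect some of these to be huge
# (or solve() to fail) for the c parameters and d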

Loop for Correlation in R

I am trying to find a way to do a nested for loop in R to get every possible correlation combination of this:
cor(y, column1* column2),
cor(y, column1* column3),
cor(y, column1* column4)
and so on
This is what I have tried so far:
for(i in 1:length(dataframe))
{
for(j in 1:length(dataframe))
{
joint_correlation(i,j)=cor(y ~ dataframe(i) * dataframe(j));
}
}
My dataframe has 115 columns like shown with a small sample:
FG_pct FGA FT FT_pct FTA GP GS GmSc MP ORB
0.625 8 0 0.00 0 1 0 6.6 28.4 2
0.500 4 0 0.00 1 2 0 2.1 17.5 0
0.000 1 0 0.00 0 3 0 1.2 6.6 1
0.500 6 0 0.00 0 4 0 3.6 13.7 1
0.500 2 0 0.00 0 5 0 0.9 7.4 1
I want to find the correlation cor(MP, column1 * column2) for every possible combination substituted for column1 and column2. This way, I wouldn't have to do every single one of them separately. If possible, I would like to save the output for each combination, cor(MP, column1 * column2), cor(MP, column1 * column3), cor(MP, column2 * column4), etc., in a separate column.
This is an example of what I want:
cor(MP, FG_pct*FT_pct)
Edit: Jean-Claude Arbaut gives a better answer in the comments to this answer: use cor(df).
Here is my botched answer: using the corrgram library (which is mainly a visual tool) we can easily get all pairwise correlations in a dataset. Example:
library(corrgram)
# Example data
df <- data.frame(x = rnorm(50, 5, 5),
                 y = rnorm(50, 2, 5))
df$z <- df$x / df$y
df$abc <- df$x * df$y * df$z
# panel arguments are necessary if you want to visualize correlations
corr <- corrgram(df,
                 order = FALSE,
                 lower.panel = panel.cor,
                 upper.panel = panel.pts,
                 text.panel = panel.txt,
                 diag.panel = panel.minmax,
                 main = "Correlation")
#call corr gives
corr
x y z abc
x 1.00000000 0.07064179 0.1402051 0.89166002
y 0.07064179 1.00000000 0.2495239 0.08024278
z 0.14020508 0.24952388 1.0000000 0.14649093
abc 0.89166002 0.08024278 0.1464909 1.00000000
There is definitely a better way of doing this with functions and without a package, but it's early here, and if you are desperate for the results this will probably do you fine.
P.S. Using the corrgram() function without assigning its result will give you a nice visualization of your correlations.
Assuming you want the correlation of MP with the product of every combination of two of the remaining columns: we can find the names of the corresponding combinations using combn(), which we put into an lapply.
combs <- do.call(cbind.data.frame,
lapply("MP", rbind, combn(names(dat)[names(dat) != "MP"], 2)))
combs
# 1 2 3
# 1 MP MP MP
# 2 FG_pct FG_pct FGA
# 3 FGA FT FT
In another lapply we subset the data on the name combinations and calculate cor(x1, x2 * x3). At the same time we store the pasted names in an attribute, to remember later what was calculated in each iteration.
res.l <- lapply(combs, function(x) {
  `attr<-`(cor(dat[, x[1]], dat[, x[2]] * dat[, x[3]]),
           "what",
           paste0(x[1], ", ", paste(x[2], "*", x[3])))
})
Finally we unlist and setNames according to the attributes.
res <- setNames(unlist(res.l), sapply(res.l, attr, "what"))
res
Result
# MP, FG_pct * FGA MP, FG_pct * FT MP, FGA * FT
# 0.2121374 0.2829003 0.4737892
Check:
(Note that you can put the names, e.g. MP and FG_pct * FGA, directly into the cor function.)
with(dat, cor(MP, FG_pct * FGA))
# [1] 0.2121374
with(dat, cor(MP, FG_pct * FT))
# [1] 0.2829003
with(dat, cor(MP, FGA * FT))
# [1] 0.4737892
To sort, use e.g. sort(res) or rev(sort(res)).
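A more compact variant of the same idea (a sketch using the same dat, with MP kept fixed as the first argument of cor()): build the pairs with combn() and apply over its columns.
prs <- combn(setdiff(names(dat), "MP"), 2)
res2 <- apply(prs, 2, function(p) cor(dat$MP, dat[[p[1]]] * dat[[p[2]]]))
names(res2) <- paste0("MP, ", prs[1, ], " * ", prs[2, ])
res2   # same values as res above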
Toy data:
set.seed(42)
dat <- as.data.frame(`colnames<-`(MASS::mvrnorm(n=1e4,
mu=c(0.425, 4.2, 0.2, 3),
Sigma=matrix(c(1, .3, .7, 0,
.3, 1, .5, 0,
.7, .5, 1, 0,
0, 0, 0, 1), nrow=4),
empirical=T), c("FG_pct", "MP", "FGA", "FT")))

Set constraints in a linear model using R

I am trying to predict future market share using the following linear model.
x <- c(20, 60, 80, 130)
y <- c(0.8, 0.15, 0.05, 0.0)
z <- data.frame(x, y)
z.fit <- lm(y ~ x, data = z)
z.prediction <- predict(z.fit, data.frame(x = c(30, 65, 70, 100)), interval = "prediction")
The predicted shares sum to more than 1; see below:
fit
1 0.544
2 0.301
3 0.267
4 0.059
How can I set a constraint in the code so that the sum of "Share" equals 1?
The question did not define Share, but assuming it means the fitted values, try the CVXR package. Note that a nonnegativity constraint has been added, but you can drop it if negative values are acceptable.
library(CVXR)
b <- Variable(2)
pred <- b[1] + b[2] * x
objective <- Minimize(sum((y - pred)^2))
constraints <- list(sum(pred) == 1, pred >= 0)
problem <- Problem(objective, constraints)
soln <- solve(problem)
bval <- soln$getValue(b)
bval
## [,1]
## [1,] 0.565217391
## [2,] -0.004347826
# check constraints
predval <- soln$getValue(pred)
round(predval, 5)
## [,1]
## [1,] 0.47826
## [2,] 0.30435
## [3,] 0.21739
## [4,] 0.00000
sum(predval)
## [1] 1
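To then predict shares at the new x values from the question, the constrained coefficients can be applied directly (a small follow-up sketch; note that only the fitted values at the original x were constrained to sum to 1, so predictions at other x values need not):
x.new <- c(30, 65, 70, 100)
round(bval[1] + bval[2] * x.new, 3)
## approximately 0.435 0.283 0.261 0.130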

Mixed Integer Programming in R - Indicator functions

I hope this message finds you well.
I am trying to solve an optimization problem formulated as a Mixed Integer Program with the lpSolveAPI R-package. However, there are indicator functions in the objective function and in some constraints. To be more specific, consider the following optimization problem:
min{ 2.8 * x1 + 3.2 * x2 + 3.5 * x3 +
17.5 * delta(x1) + 2.3 * delta(x2) + 5.5 * delta(x3) }
subject to:
0.4 * x1 + 8.7 * x2 + 4.5 * x3 <=
387 - 3 * delta(x1) - 1 * delta(x2) - 3 * delta(x3)
x1 <= 93 * delta(x1)
x2 <= 94 * delta(x2),
x3 <= 100 * delta(x3), and
x1, x2, and x3 are non-negative integers.
In this problem, for all i in {1, 2, 3}, delta(xi) = 1 if xi > 0, whereas delta(xi) = 0 otherwise.
The R-code I have so far is:
install.packages("lpSolveAPI")
library(lpSolveAPI)
a <- c(3, 1, 3)
b <- c(0.4, 8.7, 4.5)
q <- 387
M <- c(93, 94, 100)
A <- c(17.5, 2.3, 5.5)
h <- c(2.8, 3.2, 3.5)
Fn <- function(u1, u2, u3, u4) {
  lprec <- make.lp(0, 3)
  lp.control(lprec, sense = "min")
  set.objfn(lprec, u1)
  add.constraint(lprec, u2, "<=", u3)
  set.bounds(lprec, lower = rep(0, 3), upper = u4)
  set.type(lprec, columns = 1:3, type = "integer")
  solve(lprec)
  return(list(Soln = get.variables(lprec), MinObj = get.objective(lprec)))
}
TheTest <- Fn(u1 = h, u2 = b, u3 = q, u4 = M)
Please, I was wondering if someone could tell me how to put delta functions into this R-code to solve the aforementioned optimization problem.
Rodrigo.
A constraint like x1 <= 93 * delta(x1) looks very strange to me. I think this is just x1 <= 93. For a MIP solver replace the function delta(x) by a binary variable d. Then add the constraint d <= x <= M*d where M is an upper bound on x. To be explicit, for your model we have:
min 2.8*x1 + 3.2*x2 + 3.5*x3 + 17.5*d1 + 2.3*d2 + 5.5*d3
0.4*x1 + 8.7*x2 + 4.5*x3 <= 387 - 3*d1 - d2 - 3*d3
d1 <= x1 <= 93*d1
d2 <= x2 <= 94*d2
d3 <= x3 <= 100*d3
x1 integer in [0,93]
x2 integer in [0,94]
x3 integer in [0,100]
d1,d2,d3 binary
This is now trivial to solve with any MIP solver. Note that a double inequality like d1 <= x1 <= 93*d1 can be written as two inequalities: d1<=x1 and x1<=93*d1.
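A sketch of that formulation with lpSolveAPI (my own translation of the model above, reusing the constants from the question; columns 1-3 are x1-x3 and columns 4-6 are the binaries d1-d3):
library(lpSolveAPI)
a <- c(3, 1, 3)         # delta coefficients in the resource constraint
b <- c(0.4, 8.7, 4.5)   # x coefficients in the resource constraint
q <- 387
M <- c(93, 94, 100)     # upper bounds on x (the big-M values)
A <- c(17.5, 2.3, 5.5)  # objective coefficients of the deltas
h <- c(2.8, 3.2, 3.5)   # objective coefficients of the x's
lprec <- make.lp(0, 6)
lp.control(lprec, sense = "min")
set.objfn(lprec, c(h, A))
add.constraint(lprec, c(b, a), "<=", q)          # resource constraint
for (i in 1:3) {                                 # linking: d_i <= x_i <= M_i * d_i
  up <- rep(0, 6); up[i] <- 1;  up[3 + i] <- -M[i]
  add.constraint(lprec, up, "<=", 0)             # x_i - M_i * d_i <= 0
  lo <- rep(0, 6); lo[i] <- -1; lo[3 + i] <- 1
  add.constraint(lprec, lo, "<=", 0)             # d_i - x_i <= 0
}
set.type(lprec, 1:3, "integer")
set.type(lprec, 4:6, "binary")
set.bounds(lprec, lower = rep(0, 3), upper = M, columns = 1:3)
solve(lprec)
get.variables(lprec)   # first three entries are x1-x3, last three are d1-d3
Note that with the constraints exactly as posted nothing forces any x above zero, so this particular instance has the trivial optimum x = d = 0; the point of the sketch is how the indicator functions are linearized.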

Optimization with Constraints

I am working with the output from a model in which there are parameter estimates that may not follow a-priori expectations. I would like to write a function that forces these utility estimates back in line with those expectations. To do this, the function should minimize the sum of the squared deviances between the starting values and the new estimates. Since we have a-priori expectations, the optimization should be subject to the following constraints:
B0 < B1
B1 < B2
...
Bj < Bj+1
For example, the raw parameter estimates below are flip-flopped for B2 and B3. The columns Delta and Delta^2 show the deviance between the original parameter estimate and the new coefficient. I am trying to minimize the sum of the Delta^2 column. I've coded this up in Excel and shown how Excel's Solver would optimize this problem given the set of constraints:
Beta BetaRaw Delta Delta^2 BetaNew
B0 1.2 0 0 1.2
B1 1.3 0 0 1.3
B2 1.6 -0.2 0.04 1.4
B3 1.4 0 0 1.4
B4 2.2 0 0 2.2
After reading through ?optim and ?constrOptim, I'm not able to grok how to set this up in R. I'm sure I'm just being a bit dense, but could use some pointers in the right direction!
3/24/2012 - Added bounty since I'm not smart enough to translate the first answer.
Here's some R code that should be on the right path. Assuming that the betas start with:
betas <- c(1.2,1.3,1.6,1.4,2.2)
I want to minimize the following function such that b0 <= b1 <= b2 <= b3 <= b4
f <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  x3 <- x[3]
  x4 <- x[4]
  x5 <- x[5]
  loss <- (x1 - betas[1])^2 +
          (x2 - betas[2])^2 +
          (x3 - betas[3])^2 +
          (x4 - betas[4])^2 +
          (x5 - betas[5])^2
  return(loss)
}
To show that the function works, the loss should be zero if we pass the original betas in:
> f(betas)
[1] 0
And relatively large with some random inputs:
> set.seed(42)
> f(rnorm(5))
[1] 8.849329
And minimized at the values I was able to calculate in Excel:
> f(c(1.2,1.3,1.4,1.4,2.2))
[1] 0.04
1.
Since the objective is quadratic and the constraints linear,
you can use solve.QP.
It finds the b that minimizes
(1/2) * t(b) %*% Dmat %*% b - t(dvec) %*% b
under the constraints
t(Amat) %*% b >= bvec.
Here, we want b that minimizes
sum( (b-betas)^2 ) = sum(b^2) - 2 * sum(b*betas) + sum(betas^2)
= t(b) %*% b - 2 * t(b) %*% betas + sum(betas^2).
Since the last term, sum(betas^2), is constant, we can drop it,
and we can set
Dmat = diag(n)
dvec = betas.
The constraints are
b[1] <= b[2]
b[2] <= b[3]
...
b[n-1] <= b[n]
i.e.,
-b[1] + b[2] >= 0
- b[2] + b[3] >= 0
...
- b[n-1] + b[n] >= 0
so that t(Amat) is
[ -1 1 ]
[ -1 1 ]
[ -1 1 ]
[ ... ]
[ -1 1 ]
and bvec is zero.
This leads to the following code.
# Sample data
betas <- c(1.2, 1.3, 1.6, 1.4, 2.2)
# Optimization
n <- length(betas)
Dmat <- diag(n)
dvec <- betas
Amat <- matrix(0,nr=n,nc=n-1)
Amat[cbind(1:(n-1), 1:(n-1))] <- -1
Amat[cbind(2:n, 1:(n-1))] <- 1
t(Amat) # Check that it looks as it should
bvec <- rep(0,n-1)
library(quadprog)
r <- solve.QP(Dmat, dvec, Amat, bvec)
# Check the result, graphically
plot(betas)
points(r$solution, pch=16)
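For these sample betas the QP returns the monotone (isotonic) fit, which pools the out-of-order pair at its mean; as a quick numeric check (note this has a smaller sum of squares, 0.02, than the 1.4/1.4 solution quoted in the question):
round(r$solution, 2)
## [1] 1.2 1.3 1.5 1.5 2.2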
2.
You can use constrOptim in the same way (the objective function can be arbitrary, but the constraints have to be linear).
3.
More generally, you can use optim if you reparametrize the problem
into a non-constrained optimization problem,
for instance
b[1] = exp(x[1])
b[2] = b[1] + exp(x[2])
...
b[n] = b[n-1] + exp(x[n]).
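A minimal sketch of this reparametrization (my own illustration, not part of the original answer; the first element is left as x[1] rather than exp(x[1]) so that b[1] is not forced to be positive):
betas <- c(1.2, 1.3, 1.6, 1.4, 2.2)
to_b <- function(x) cumsum(c(x[1], exp(x[-1])))   # b[1] = x[1], b[k] = b[k-1] + exp(x[k])
obj  <- function(x) sum((to_b(x) - betas)^2)
fit  <- optim(rep(0, length(betas)), obj, method = "BFGS")
round(to_b(fit$par), 3)   # should approach the monotone fit found by solve.QP above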
Alright, this is starting to take form, but it still has some bugs. Based on the conversation in chat with @Joran, it seems I can include a conditional that sets the loss function to an arbitrarily large value if the values are not in order. This seems to work IF the discrepancy occurs between the first two coefficients, but not thereafter. I'm having a hard time parsing out why that would be the case.
Function to minimize:
f <- function(x, x0) {
  x1 <- x[1]
  x2 <- x[2]
  x3 <- x[3]
  x4 <- x[4]
  x5 <- x[5]
  loss <- (x1 - x0[1])^2 +
          (x2 - x0[2])^2 +
          (x3 - x0[3])^2 +
          (x4 - x0[4])^2 +
          (x5 - x0[5])^2
  # Make sure the coefficients are in order
  if (any(diff(c(x1, x2, x3, x4, x5)) > 0)) loss <- 10000000
  return(loss)
}
Working example (sort of, it seems the loss would be minimized if b0 = 1.24?):
> betas <- c(1.22, 1.24, 1.18, 1.12, 1.10)
> optim(betas, f, x0 = betas)$par
[1] 1.282 1.240 1.180 1.120 1.100
Non-working example (note that the third element is still larger than the second):
> betas <- c(1.20, 1.15, 1.18, 1.12, 1.10)
> optim(betas, f, x0 = betas)$par
[1] 1.20 1.15 1.18 1.12 1.10
