I would like to perform nonlinear least squares regression in R where I simultaneously minimize the squared residuals of three models (see below). Now, the three models share some of the parameters, in my example, parameters b and d.
Is there a way of doing this with nls(), or with either of the packages minpack.lm or nlsr?
So, ideally, I would like to build a single objective function (the sum of the squared residuals of all three models together) and regress all parameters at once: a1, a2, a3, b, c1, c2, c3 and d.
(I am trying to avoid running three independent regressions and then perform some averaging on b and d.)
my_model <- function(x, a, b, c, d) {
  a * b ^ (x - c) + d
}
# x values
x <- seq(0, 10, 0.2)
# Shared parameters
b <- 2
d <- 10
a1 <- 1
c1 <- 1
y1 <- my_model(x,
a = a1,
b = b,
c = c1,
d = d) + rnorm(length(x))
a2 <- 2
c2 <- 5
y2 <- my_model(x,
a = a2,
b = b,
c = c2,
d = d) + rnorm(length(x))
a3 <- -2
c3 <- 3
y3 <- my_model(x,
a = a3,
b = b,
c = c3,
d = d) + rnorm(length(x))
plot(
y1 ~ x,
xlim = range(x),
ylim = d + c(-50, 50),
type = 'b',
col = 'red',
ylab = 'y'
)
lines(y2 ~ x, type = 'b', col = 'green')
lines(y3 ~ x, type = 'b', col = 'blue')
Below we run nls (using a slightly modified model) and nlxb (from nlsr), but nlxb stops before convergence. Despite these problems, both nevertheless give results that visually fit the data well. These problems suggest that there is an issue with the model itself, so in the Other section, guided by the nlxb output, we show how to fix the model, giving a submodel of the original model that fits the data easily with both nls and nlxb and also gives a good fit. At the end, in the Note section, we provide the data in reproducible form.
nls
Assuming the setup shown reproducibly in the Note at the end, reformulate the problem for the nls plinear algorithm by defining a right-hand-side matrix whose columns multiply each of the linear parameters, a1, a2, a3 and d, respectively. plinear does not require starting values for those, which simplifies the setup, and it will report them as .lin1, .lin2, .lin3 and .lin4 respectively.
To get starting values we used a simpler model with no grouping and a grid search over b from 1 to 10 and c also from 1 to 10, using nls2 from the package of the same name. We also found that nls still produced errors, but by using abs in the formula, as shown, it ran to completion.
These difficulties suggest that there is a fundamental problem with the model, and in the Other section we discuss how to fix it up.
xx <- c(x, x, x)
yy <- c(y1, y2, y3)
# starting values using nls2
library(nls2)
fo0 <- yy ~ cbind(b ^ abs(xx - c), 1)
st0 <- data.frame(b = c(1, 10), c = c(1, 10))
fm0 <- nls2(fo0, start = st0, alg = "plinear-brute")
# run nls using starting values from above
g <- rep(1:3, each = length(x))
fo <- yy ~ cbind((g==1) * b ^ abs(xx - c[g]),
(g==2) * b ^ abs(xx - c[g]),
(g==3) * b ^ abs(xx - c[g]),
1)
st <- with(as.list(coef(fm0)), list(b = b, c = c(c, c, c)))
fm <- nls(fo, start = st, alg = "plinear")
plot(yy ~ xx, col = g)
for(i in unique(g)) lines(predict(fm) ~ xx, col = i, subset = g == i)
fm
giving:
Nonlinear regression model
model: yy ~ cbind((g == 1) * b^abs(xx - c[g]), (g == 2) * b^abs(xx - c[g]), (g == 3) * b^abs(xx - c[g]), 1)
data: parent.frame()
b c1 c2 c3 .lin1 .lin2 .lin3 .lin4
1.997 0.424 1.622 1.074 0.680 0.196 -0.532 9.922
residual sum-of-squares: 133
Number of iterations to convergence: 5
Achieved convergence tolerance: 5.47e-06
nlsr
With nlsr it would be done like this. No grid search for starting values was needed and adding abs was not needed either. The b and d values seem similar to the nls solution but the other coefficients differ. Visually both solutions seem to fit the data.
On the other hand, the JSingval column shows that the Jacobian is rank deficient, which caused nlxb to stop without producing SE values, and the convergence is in doubt (although it may be sufficient given that, visually, the plot, not shown, seems like a good fit). We discuss how to fix this up in the Other section.
library(nlsr)
g1 <- g == 1; g2 <- g == 2; g3 <- g == 3
fo2 <- yy ~ g1 * (a1 * b ^ (xx - c1) + d) +
g2 * (a2 * b ^ (xx - c2) + d) +
g3 * (a3 * b ^ (xx - c3) + d)
st2 <- list(a1 = 1, a2 = 1, a3 = 1, b = 1, c1 = 1, c2 = 1, c3 = 1, d = 1)
fm2 <- nlxb(fo2, start = st2)
fm2
giving:
vn: [1] "yy" "g1" "a1" "b" "xx" "c1" "d" "g2" "a2" "c2" "g3" "a3" "c3"
no weights
nlsr object: x
residual sumsquares = 133.45 on 153 observations
after 16 Jacobian and 22 function evaluations
name coeff SE tstat pval gradient JSingval
a1 3.19575 NA NA NA 9.68e-10 4097
a2 0.64157 NA NA NA 8.914e-11 662.5
a3 -1.03096 NA NA NA -1.002e-09 234.9
b 1.99713 NA NA NA -2.28e-08 72.57
c1 2.66146 NA NA NA -2.14e-09 10.25
c2 3.33564 NA NA NA -3.955e-11 1.585e-13
c3 2.0297 NA NA NA -7.144e-10 1.292e-13
d 9.92363 NA NA NA -2.603e-12 3.271e-14
We can calculate SEs using nls2 as a second stage, but this still does not address the underlying problem that the singular values suggest.
summary(nls2(fo2, start = coef(fm2), algorithm = "brute-force"))
giving:
Formula: yy ~ g1 * (a1 * b^(xx - c1) + d) + g2 * (a2 * b^(xx - c2) + d) +
g3 * (a3 * b^(xx - c3) + d)
Parameters:
Estimate Std. Error t value Pr(>|t|)
a1 3.20e+00 5.38e+05 0.0 1
a2 6.42e-01 3.55e+05 0.0 1
a3 -1.03e+00 3.16e+05 0.0 1
b 2.00e+00 2.49e-03 803.4 <2e-16 ***
c1 2.66e+00 9.42e-02 28.2 <2e-16 ***
c2 3.34e+00 2.43e+05 0.0 1
c3 2.03e+00 8.00e+05 0.0 1
d 9.92e+00 4.42e+05 0.0 1
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.959 on 145 degrees of freedom
Number of iterations to convergence: 8
Achieved convergence tolerance: NA
Other
When nls has trouble fitting a model it often suggests that there is something wrong with the model itself. Playing around with it a bit, guided by the JSingval column in the nlsr output above, which suggests that the c parameters or d might be the problem, we find that if we fix all c parameter values to 0 then the model is easy to fit given sufficiently good starting values, and it still gives a low residual sum of squares.
library(nls2)
fo3 <- yy ~ cbind((g==1) * b ^ xx, (g==2) * b ^ xx, (g==3) * b ^ xx, 1)
st3 <- coef(fm0)["b"]
fm3 <- nls(fo3, start = st3, alg = "plinear")
giving:
Nonlinear regression model
model: yy ~ cbind((g == 1) * b^xx, (g == 2) * b^xx, (g == 3) * b^xx, 1)
data: parent.frame()
b .lin1 .lin2 .lin3 .lin4
1.9971 0.5071 0.0639 -0.2532 9.9236
residual sum-of-squares: 133
Number of iterations to convergence: 4
Achieved convergence tolerance: 1.67e-09
which the following anova indicates is comparable to fm from above despite having 3 fewer parameters:
anova(fm3, fm)
giving:
Analysis of Variance Table
Model 1: yy ~ cbind((g == 1) * b^xx, (g == 2) * b^xx, (g == 3) * b^xx, 1)
Model 2: yy ~ cbind((g == 1) * b^abs(xx - c[g]), (g == 2) * b^abs(xx - c[g]), (g == 3) * b^abs(xx - c[g]), 1)
Res.Df Res.Sum Sq Df Sum Sq F value Pr(>F)
1 148 134
2 145 133 3 0.385 0.14 0.94
We can redo fm3 using nlxb like this:
fo4 <- yy ~ g1 * (a1 * b ^ xx + d) +
g2 * (a2 * b ^ xx + d) +
g3 * (a3 * b ^ xx + d)
st4 <- list(a1 = 1, a2 = 1, a3 = 1, b = 1, d = 1)
fm4 <- nlxb(fo4, start = st4)
fm4
giving:
nlsr object: x
residual sumsquares = 133.45 on 153 observations
after 24 Jacobian and 33 function evaluations
name coeff SE tstat pval gradient JSingval
a1 0.507053 0.005515 91.94 1.83e-132 8.274e-08 5880
a2 0.0638554 0.0008735 73.11 4.774e-118 1.26e-08 2053
a3 -0.253225 0.002737 -92.54 7.154e-133 -4.181e-08 2053
b 1.99713 0.002294 870.6 2.073e-276 -2.55e-07 147.5
d 9.92363 0.09256 107.2 3.367e-142 -1.219e-11 10.26
Note
The assumed input below is the same as in the question except we additionally set the seed to make it reproducible.
set.seed(123)
my_model <- function(x, a, b, c, d) a * b ^ (x - c) + d
x <- seq(0, 10, 0.2)
b <- 2; d <- 10 # shared
a1 <- 1; c1 <- 1
y1 <- my_model(x, a = a1, b = b, c = c1, d = d) + rnorm(length(x))
a2 <- 2; c2 <- 5
y2 <- my_model(x, a = a2, b = b, c = c2, d = d) + rnorm(length(x))
a3 <- -2; c3 <- 3
y3 <- my_model(x, a = a3, b = b, c = c3, d = d) + rnorm(length(x))
I'm not sure this is really the best way, but you could minimize the sum of the squared residuals using optim().
#start values
params <- c(a1=1, a2=1, a3=1, b=1, c1=1, c2=1, c3=1,d=1)
# minimize total sum of squares of residuals
fun <- function(p) {
sum(
(y1-my_model(x, p["a1"], p["b"], p["c1"], p["d"]))^2 +
(y2-my_model(x, p["a2"], p["b"], p["c2"], p["d"]))^2 +
(y3-my_model(x, p["a3"], p["b"], p["c3"], p["d"]))^2
)
}
out <- optim(params, fun, method="BFGS")
out$par
# a1 a2 a3 b c1 c2 c3
# 0.8807542 1.0241804 -2.8805848 1.9974615 0.7998103 4.0030597 3.5184600
# d
# 9.8764917
And we can add the fitted curves on top of the plot:
curve(my_model(x, out$par["a1"], out$par["b"], out$par["c1"], out$par["d"]), col="red", add=T)
curve(my_model(x, out$par["a2"], out$par["b"], out$par["c2"], out$par["d"]), col="green", add=T)
curve(my_model(x, out$par["a3"], out$par["b"], out$par["c3"], out$par["d"]), col="blue", add=T)
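Since b and d are shared across the three curves, a quick check (a sketch using the out object returned by optim() above) is to compare the fitted shared parameters with the values used to simulate the data (b = 2, d = 10) and look at the achieved objective:
out$par[c("b", "d")]   # should be close to the simulated b = 2 and d = 10
out$value              # total residual sum of squares at the optimum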
I'm trying to solve a non-linear optimization problem with constraints. I wrote the code (see below) to minimize the sum of squared errors subject to a constraint: I need to find a[1] and a[2] so that they sum to 1. I provide a vector of initial guesses, and the code runs, but in the end it gives me nothing but my initial guesses again. What am I doing wrong? Thank you.
library(Rsolnp)
MIS <- c(0.0156, 0.0087, 0.0468)
EDF <- c(0.0008, 0.0088, 0.0059)
QFM <- data.frame(Factor1 = c(100,100,50), Factor2 = c(50,25,100))
qt.mean <- mean(EDF)
qt.sd <- sd(EDF)
z.qt <- (qnorm(EDF) - qt.mean)/qt.sd
weight <- 0.7
alpha <- -2.7
beta <- 1.0
objfun <- function(a) {
RQL <- a[1] * QFM$Factor1 + a[2] * QFM$Factor2
z.ql <- (RQL - mean(RQL))/sd(RQL)
corr.factor <- cor(z.qt, z.ql)
denom <- sqrt(weight ^ 2 + (1 - weight)^2 + 2 * corr.factor * weight * (1- weight))
z.cs <- 1/denom * (weight * z.qt + (1-weight) * z.ql)
z.fs <- alpha + beta * z.cs
return(sum((MIS - pnorm(z.fs))^2))
}
eqn <- function (a) {sum(a)}
solnp(c(0.5,0.5), fun = objfun, eqfun = eqn, eqB = 1, LB = c(0,0), UB = c(1,1))
Iter: 1 fn: 0.002509 Pars: 0.50000 0.50000
solnp--> Completed in 1 iterations
$pars
[1] 0.5 0.5
$convergence
[1] 0
$values
[1] 0.00250929 0.00250929
$lagrange
[,1]
[1,] 0
$hessian
[,1] [,2]
[1,] 1 0
[2,] 0 1
$ineqx0
NULL
$nfuneval
[1] 35
$outer.iter
[1] 1
$elapsed
Time difference of 0.02330089 secs
$vscale
[1] 0.00250929 0.60000000 1.00000000 1.00000000
I am attempting to perform constrained optimization in R. I have looked at these posts and a couple of others:
constrained optimization in R
function constrained optimization in R
The first post above is quite helpful, but I am still not obtaining the correct answer to my problem.
My function is:
Fd <- 224 * d1 + 84 * d2 + d1 * d2 - 2 * d1^2 - d2^2
and my constraint is: 3 * d1 + d2 = 280
First I find the correct answer using an unconstrained exhaustive search followed by a constrained exhaustive search:
my.data <- expand.grid(x1 = seq(0, 200, 1), x2 = seq(0, 200, 1))
head(my.data)
dim(my.data)
d1 <- my.data[,1]
d2 <- my.data[,2]
Fd <- 224 * d1 + 84 * d2 + d1 * d2 - 2 * d1^2 - d2^2
new.data <- data.frame(Fd = Fd, d1 = d1, d2 = d2)
head(new.data)
# identify values of d1 and d2 that maximize Fd without the constraint
new.data[new.data$Fd == max(new.data$Fd),]
# **This is the correct answer**
# Fd d1 d2
# 6157 11872 76 80
# Impose constraint
new.data <- new.data[(3 * new.data$d1 + new.data$d2) == 280, ]
# identify values of d1 and d2 that maximize Fd with the constraint
new.data[new.data$Fd == max(new.data$Fd),]
# **This is the correct answer**
# Fd d1 d2
# 14743 11774 69 73
Now find unconstrained maxima using optim. This works.
Fd <- function(betas) {
b1 = betas[1]
b2 = betas[2]
(224 * b1 + 84 * b2 + b1 * b2 - 2 * b1^2 - b2^2)
}
# unconstrained
optim(c(60, 100), Fd, control=list(fnscale=-1), method = "BFGS", hessian = TRUE)
# $par
# [1] 75.99999 79.99995
Now find constrained maxima using constrOptim. This does not work.
b1.lower.bound <- c(0, 280)
b1.upper.bound <- c(93.33333, 0)
b2.lower.bound <- c(93.33333, 0)
b2.upper.bound <- c(0, 280)
theta = c(60,100) # starting values
ui = rbind(c(280,0), c(0,93.33333)) # range of allowable values
theta %*% ui # obtain ci as -1 * theta %*% ui
# [,1] [,2]
# [1,] 16800 9333.333
constrOptim(c(60,100), Fd, NULL, ui = rbind(c(280,0), c(0,93.33333)), ci = c(-16800, -9333.333), control=list(fnscale=-1))
# $par
# [1] 75.99951 80.00798
I have tried playing around with ui and ci, but it seems like no matter what values I use for them I always get the same answer as with unconstrained optim.
Thank you for any advice.
constrOptim() uses linear inequality constraints and defines the feasible region by ui %*% param - ci >= 0. If the constraint is 3 * d1 + d2 <= 280, ui is c(-3, -1) and ci is -280.
constrOptim(); inequality constraint is: 3 * d1 + d2 <= 280
Fd <- function(betas) {
b1 = betas[1]
b2 = betas[2]
(224 * b1 + 84 * b2 + b1 * b2 - 2 * b1^2 - b2^2)
}
theta = c(59.999,100) # because of needing " ui %*% initial_par - ci > 0 "
ui = c(-3, -1)
ci = -280 # those ui & ci mean " -3*par[1] + -1*par[2] + 280 >= 0 "
constrOptim(theta, Fd, NULL, ui = ui, ci = ci, control=list(fnscale=-1))
# $par
# [1] 69.00002 72.99993
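As a quick check (storing the result of the call above in a variable sol, introduced here for illustration), the solution sits on the constraint boundary and reproduces the constrained grid-search maximum:
sol <- constrOptim(theta, Fd, NULL, ui = ui, ci = ci, control=list(fnscale=-1))
3 * sol$par[1] + sol$par[2]   # approximately 280, so the constraint is active
Fd(sol$par)                   # approximately 11774, matching the constrained grid search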
[Edited]
If you want equality rather than inequality constraints, it would be better to use the Rsolnp or alabama package. They can handle inequality and/or equality constraints (see Constrained Optimization library for equality and inequality constraints).
solnp(); auglag(); equality constraint is: 3 * d1 + d2 = 280
library(Rsolnp); library(alabama);
Fd2 <- function(betas) { # -1 * Fd
b1 = betas[1]
b2 = betas[2]
-1 * (224 * b1 + 84 * b2 + b1 * b2 - 2 * b1^2 - b2^2)
}
eqFd <- function(betas) { # the equality constraint
b1 = betas[1]
b2 = betas[2]
(3 * b1 + b2 -280)
}
solnp(pars = c(60, 100), fun = Fd2, eqfun = eqFd, eqB = 0)
auglag(par = c(60, 100), fn = Fd2, heq = eqFd)
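Output is not shown here, but both calls should converge to approximately d1 = 69, d2 = 73, the same point the exhaustive search found. For example, a quick check of the solnp result (stored in res for illustration):
res <- solnp(pars = c(60, 100), fun = Fd2, eqfun = eqFd, eqB = 0)
res$pars                        # approximately c(69, 73)
3 * res$pars[1] + res$pars[2]   # approximately 280, so the equality constraint holds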
Here I have implemented G. Grothendieck's suggestion and it seems to return the correct answer, although ideally I would like to learn how to obtain the correct answer using constrained optimization. I used the Brent method here because there is only one variable. Note that I had to provide upper and lower limits in the optim call.
# Find maxima using optim and substitution. First remove b2
#
# 3 * b1 + b2 = 280
#
# b2 = (280 - 3 * b1)
Fd <- function(betas) {
b1 = betas[1]
(224 * b1 + 84 * (280 - 3 * b1) + b1 * (280 - 3 * b1) - 2 * b1^2 - (280 - 3 * b1)^2)
}
optim(c(60), Fd, method = "Brent", lower = 0, upper = 93.33333, control=list(fnscale=-1))
# $par
# [1] 69
# Now remove b1
#
# 3 * b1 + b2 = 280
#
# b1 = ((280 - b2) / 3)
Fd <- function(betas) {
b2 = betas[1]
(224 * ((280 - b2) / 3) + 84 * b2 + ((280 - b2) / 3) * b2 - 2 * ((280 - b2) / 3)^2 - b2^2)
}
optim(c(100), Fd, method = "Brent", lower = 0, upper = 280, control=list(fnscale=-1))
# $par
# [1] 73
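Plugging the pair of results back into the original two-variable objective (re-defined here as Fd2var so as not to overwrite the substituted versions above) confirms the constrained maximum found by the exhaustive search:
# evaluate the original objective at d1 = 69, d2 = 73
Fd2var <- function(betas) {
  b1 = betas[1]
  b2 = betas[2]
  (224 * b1 + 84 * b2 + b1 * b2 - 2 * b1^2 - b2^2)
}
Fd2var(c(69, 73))
# [1] 11774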
I am trying to simulate cell uptake in R, having ported a model from Berkeley Madonna. The model consists of several constants and differential equations to calculate amounts and concentrations. A portion of the code is listed below:
library(deSolve)
fb = 0.0510
Km = 23.5
Pdif = 0.429
Vmax = 270
Vol_cell = 9.33
Vol_media = 150
S = 10 #concentration of dosing media
yini = c(Amt_media=(S*Vol_media)-(S*fb*Vol_cell),
Amt_cell=S*fb*Vol_cell,
Amt_total=S*Vol_media,
Con_media=S-(S*fb),
Con_cell=S*fb)
Uptake = function(t, y, p){
dy1 = (- (Pdif * y[1]) + (Pdif * y[2]) - ((Vmax * y[4])/(Km + y[4])))
dy2 = (+ (Pdif * y[1]) - (Pdif * y[2]) + ((Vmax * y[4])/(Km + y[4])))
dy3 = dy1 + dy2
dy4 = dy1 / Vol_media
dy5 = dy2 / Vol_cell
list(c(dy1, dy2, dy3, dy4, dy5))}
times1 = seq(from=0, to=15, by=0.01)
out1 = ode(y=yini, times=times1, func=Uptake, parms=NULL, method="rk4")
The rest of the code is for output to data frames and plotting. My question, then, is how to structure the code to use "S" as a list of several concentrations, such that each concentration can be applied to the differential equations (essentially giving me an out1 for S1, an out2 for S2, etc., that can then be passed on to a data frame)? In Berkeley Madonna this was achieved by writing over 35 differential equations, but I'd like to use a simplified approach in R if possible.
The only part where S is used is in the initialization of the yini values. Basically we just need to move that part, and the part that runs ode with those values, into a new function. Then you can call that function for whatever values you want. For example:
#set up
library(deSolve)
fb <- 0.0510
Km <- 23.5
Pdif <- 0.429
Vmax <- 270
Vol_cell <- 9.33
Vol_media <- 150
Uptake <- function(t, y, p){
dy1 = (- (Pdif * y[1]) + (Pdif * y[2]) - ((Vmax * y[4])/(Km + y[4])))
dy2 = (+ (Pdif * y[1]) - (Pdif * y[2]) + ((Vmax * y[4])/(Km + y[4])))
dy3 = dy1 + dy2
dy4 = dy1 / Vol_media
dy5 = dy2 / Vol_cell
list(c(dy1, dy2, dy3, dy4, dy5))}
times1 <- seq(from=0, to=15, by=0.01)
# function with S as a parameter
runConc <- function(S) {
yini <- c(Amt_media=(S*Vol_media)-(S*fb*Vol_cell),
Amt_cell=S*fb*Vol_cell,
Amt_total=S*Vol_media,
Con_media=S-(S*fb),
Con_cell=S*fb)
ode(y=yini, times=times1, func=Uptake, parms=NULL, method="rk4")
}
#run for concentrations 10,20,30
out <- lapply(c(10,20,30), runConc)
This will result in a list object with the results for each concentration. So out[[1]] is the result for S=10, out[[2]] is S=20, etc. We can see the first few lines of each of the results with
lapply(out, head, 3)
# [[1]]
# time Amt_media Amt_cell Amt_total Con_media Con_cell
# [1,] 0.00 1495.242 4.75830 1500 9.490000 0.510000
# [2,] 0.01 1488.103 11.89710 1500 9.442408 1.275145
# [3,] 0.02 1481.028 18.97216 1500 9.395241 2.033457
#
# [[2]]
# time Amt_media Amt_cell Amt_total Con_media Con_cell
# [1,] 0.00 2990.483 9.51660 3000 18.98000 1.020000
# [2,] 0.01 2976.550 23.44980 3000 18.88711 2.513377
# [3,] 0.02 2962.739 37.26072 3000 18.79504 3.993646
#
# [[3]]
# time Amt_media Amt_cell Amt_total Con_media Con_cell
# [1,] 0.00 4485.725 14.27490 4500 28.47000 1.53000
# [2,] 0.01 4465.153 34.84653 4500 28.33286 3.73489
# [3,] 0.02 4444.761 55.23920 4500 28.19690 5.92060
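If you then want everything in one data frame, as mentioned in the question, one way (a sketch; S_values and combined are names introduced here) is to bind the results together with an added S column:
S_values <- c(10, 20, 30)
combined <- do.call(rbind, lapply(seq_along(out), function(i)
  cbind(S = S_values[i], as.data.frame(out[[i]]))))
head(combined)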
I have been trying to solve a constrained optimization problem in R using constrOptim() (my first time) but am struggling to set up the constraints for my problem.
The problem is pretty straightforward and I can set up the function OK, but I am a bit at a loss about how to pass the constraints in.
For example, for the problem I've defined (I am going to start with N fixed at, say, 1000, so I just want to solve for X; ultimately I'd like to choose both N and X to maximize profit), I can set up the function as:
fun <- function(x, N, a, c, s) { ## a profit function
x1 <- x[1]
x2 <- x[2]
x3 <- x[3]
a1 <- a[1]
a2 <- a[2]
a3 <- a[3]
c1 <- c[1]
c2 <- c[2]
c3 <- c[3]
s1 <- s[1]
s2 <- s[2]
s3 <- s[3]
((N*x1*a1*s1)-(N*x1*c1))+((N*x2*a2*s2)-(N*x2*c2))+((N*x3*a3*s3)-(N*x3*c3))
}
The constraints i need to implement are that:
x1>=0.03
x1<=0.7
x2>=0.03
x2<=0.7
x3>=0.03
x3<=0.7
x1+x2+x3=1
The X here represents buckets into which I need to optimally allocate N, so x1 = percent of N to place in bucket 1, etc., with each bucket having at least 3% but no more than 70%.
Any help much appreciated...
Here is an example I used to test that the function does what I want:
fun <- function(x, N, a, c, s) { ## profit function
x1 <- x[1]
x2 <- x[2]
x3 <- x[3]
a1 <- a[1]
a2 <- a[2]
a3 <- a[3]
c1 <- c[1]
c2 <- c[2]
c3 <- c[3]
s1 <- s[1]
s2 <- s[2]
s3 <- s[3]
((N*x1*a1*s1)-(N*x1*c1))+((N*x2*a2*s2)-(N*x2*c2))+((N*x3*a3*s3)-(N*x3*c3))
};
x <-matrix(c(0.5,0.25,0.25));
a <-matrix(c(0.2,0.15,0.1));
s <-matrix(c(100,75,50));
c <-matrix(c(10,8,7));
N <- 1000;
fun(x,N,a,c,s);
You can use the lpSolveAPI package.
## problem constants
a <- c(0.2, 0.15, 0.1)
s <- c(100, 75, 50)
c <- c(10, 8, 7)
N <- 1000
## Problem formulation
# x1 >= 0.03
# x1 <= 0.7
# x2 >= 0.03
# x2 <= 0.7
# x3 >= 0.03
# x1 +x2 + x3 = 1
#N*(c1- a1*s1)* x1 + (a2*s2 - c2)* x2 + (a3*s3- c3)* x3
library(lpSolveAPI)
my.lp <- make.lp(6, 3)
The best way to build a model in lpSolveAPI is column-wise:
#constraints by columns
set.column(my.lp, 1, c(1, 1, 0, 0, 1, 1))
set.column(my.lp, 2, c(0, 0, 1, 1, 0, 1))
set.column(my.lp, 3, c(0, 0, 0, 0, 1, 1))
# the objective function; since we need to maximize, I set the negative: max(f) = -min(-f)
set.objfn (my.lp, -N*c(c[1]- a[1]*s[1], a[2]*s[2] - c[2],a[3]*s[3]- c[3]))
set.rhs(my.lp, c(rep(c(0.03,0.7),2),0.03,1))
#constraint types
set.constr.type(my.lp, c(rep(c(">=","<="), 2),">=","="))
Take a look at the model:
my.lp
Model name:
C1 C2 C3
Minimize 10000 -3250 2000
R1 1 0 0 >= 0.03
R2 1 0 0 <= 0.7
R3 0 1 0 >= 0.03
R4 0 1 0 <= 0.7
R5 1 0 1 >= 0.03
R6 1 1 1 = 1
Kind Std Std Std
Type Real Real Real
Upper Inf Inf Inf
Lower 0 0 0
solve(my.lp)
[1] 0 ## success :)
get.objective(my.lp)
[1] -1435
get.constraints(my.lp)
[1] 0.70 0.70 0.03 0.03 0.97 1.00
## the decisions variables
get.variables(my.lp)
[1] 0.03 0.70 0.27
Hi, just in case it is of use to anyone, I also found an answer, shown below.
First of all, your objective function can be written a lot more concisely using vector operations:
> my_obj_coeffs <- function(N,a,c,s) N*(a*s-c)
> fun <- function(x,N,a,c,s) sum(my_obj_coeffs(N,a,c,s) * x)
You're trying to solve a linear program, so you can solve it using the simplex algorithm. There's a lightweight implementation of it in the 'boot' package.
> library(boot)
> solution <- function(obj) simplex(obj, diag(3), rep(0.7,3), diag(3), rep(0.03,3), matrix(1,1,3), 1, maxi=TRUE)
Then for the example parameters you used, you can call that solution function:
> a <- c(0.2,0.15,0.1)
> s <- c(100,75,50)
> c <- c(10,8,7)
> N <- 1000
> solution(my_obj_coeffs(N,a,c,s))
Linear Programming Results
Call : simplex(a = obj(N, a, s, c), A1 = diag(3), b1 = rep(0.7, 3),
A2 = diag(3), b2 = rep(0.03, 3), A3 = matrix(1, 1, 3), b3 = 1,
maxi = TRUE)
Maximization Problem with Objective Function Coefficients
[,1]
[1,] 10000
[2,] 3250
[3,] -2000
attr(,"names")
[1] "x1" "x2" "x3"
Optimal solution has the following values
x1 x2 x3
0.70 0.27 0.03
The optimal value of the objective function is 7817.5.
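As a quick arithmetic check of that value, using my_obj_coeffs and the solution above:
> sum(my_obj_coeffs(N, a, c, s) * c(0.70, 0.27, 0.03))
[1] 7817.5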