Optimization of budget allocation in R (formerly Excel Solver) - r

I translated a problem I had in Excel into R. I want to allocate a fixed budget so that "Gesamt" (German for "total", the value returned by the function below) is maximized.
NrwGes <- function(Budget, Speed, maxnrw, cpcrp) {
  # Note: Speed, maxnrw and cpcrp are redefined or ignored inside the function,
  # so only Budget actually matters.
  BudgetA <- Budget[1]
  BudgetB <- Budget[2]
  BudgetC <- Budget[3]
  BudgetD <- Budget[4]
  BudgetE <- Budget[5]

  MaxNRW <- c(90, 40, 40, 25, 15)
  Speed  <- c(0.9, 0.9, 0.9, 0.9, 0.9)
  cpcrp  <- c(6564, 4494, 3962, 4525, 4900)

  # channel-level input: budget (x 1000) divided by the cost figure cpcrp
  TV      <- BudgetA * 1000 / cpcrp[1]
  Catchup <- BudgetB * 1000 / cpcrp[2]
  YT      <- BudgetC * 1000 / cpcrp[3]
  FB      <- BudgetD * 1000 / cpcrp[4]
  Display <- BudgetE * 1000 / cpcrp[5]

  # saturating curve per channel, capped near MaxNRW
  a <- TV^Speed[1]      / (1 + abs(TV^Speed[1]      - 1) / (MaxNRW[1] * 0.98))
  b <- Catchup^Speed[2] / (1 + abs(Catchup^Speed[2] - 1) / (MaxNRW[2] * 0.98))
  c <- YT^Speed[3]      / (1 + abs(YT^Speed[3]      - 1) / (MaxNRW[3] * 0.98))
  d <- FB^Speed[4]      / (1 + abs(FB^Speed[4]      - 1) / (MaxNRW[4] * 0.98))
  e <- Display^Speed[5] / (1 + abs(Display^Speed[5] - 1) / (MaxNRW[5] * 0.93))

  # combine the five channel values (each channel only adds to the remaining share)
  Gesamt <- a +
    (100 - a) / 100 * b +
    ((100 - a) / 100 * (100 - b) / 100 * c) +
    ((100 - a) / 100 * (100 - b) / 100 * (100 - c) / 100 * d) +
    ((100 - a) / 100 * (100 - b) / 100 * (100 - c) / 100 * (100 - d) / 100 * e)

  return(Gesamt)
}
I have a total budget (e.g. 5000) that can be allocated differently across the channels to maximize "Gesamt". Examples:
NrwGes(c(5000, 0, 0, 0, 0)) # 72.16038
NrwGes(c(2000, 1500, 1000, 500, 0)) # 84.23121
Brute force or grid search is not an option, since this will be done 15-20 times and the algorithm will be used in an R Shiny app.

Try optim with the L-BFGS-B method (which allows box bounds) and a lower bound of 0. Project the input components onto a vector that sums to 5000 before passing it to NrwGes. control = list(fnscale = -1) tells optim to maximize rather than minimize. The final allocation is proj(res$par), shown at the bottom. No packages are used.
proj <- function(x) 5000 * x / sum(x)  # rescale so the components sum to 5000
st <- proj(rep(1, 5))                  # equal-split starting value
f <- function(x) NrwGes(proj(x))
res <- optim(st, f, lower = 0 * st, method = "L-BFGS-B", control = list(fnscale = -1))
giving:
> res
$par
[1] 2107.8438 482.5702 468.9409 268.0808 142.4305
$value
[1] 86.64285
$counts
function gradient
14 14
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
> proj(res$par) # final allocation
[1] 3037.3561 695.3729 675.7334 386.2984 205.2391
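As a quick sanity check (a small sketch, assuming res, proj and NrwGes from above are still in the workspace), the projected allocation sums to the full budget and reproduces the reported objective value:
alloc <- proj(res$par)
sum(alloc)     # 5000, by construction of proj()
NrwGes(alloc)  # 86.64285, identical to res$value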

Another option is the nloptr package:
library(nloptr)
# we use the NLOPT_LN_COBYLA algorithm because it doesn't need gradient functions
opts <- list(algorithm = "NLOPT_LN_COBYLA",
             xtol_rel = 1.0e-8,
             maxeval = 10000)

# objective function (negated because nloptr always minimizes)
objFun <- function(x) { -NrwGes(x) }

# sum of budget <= 5000 (in the form g(x) <= 0)
g <- function(x) { sum(x) - 5000 }

res <- nloptr(x0 = rep.int(0, 5),       # initial solution (all zeros)
              eval_f = objFun,
              lb = rep.int(0, 5),       # lower bounds = 0
              ub = rep.int(5000, 5),    # upper bounds = 5000
              eval_g_ineq = g,
              opts = opts)
Result:
> res
Call:
nloptr(x0 = rep.int(0, 5), eval_f = objFun, lb = rep.int(0, 5),
ub = rep.int(5000, 5), eval_g_ineq = g, opts = opts)
Minimization using NLopt version 2.4.2
NLopt solver status: 4 ( NLOPT_XTOL_REACHED: Optimization stopped because xtol_rel
or xtol_abs (above) was reached. )
Number of Iterations....: 261
Termination conditions: xtol_rel: 1e-08 maxeval: 10000
Number of inequality constraints: 1
Number of equality constraints: 0
Optimal value of objective function: -86.6428477187536
Optimal value of controls: 3037.382 695.3725 675.7232 386.2929 205.2291
N.B. You can access the solution and objective value of res via res$solution, res$objective, etc.
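For example, to pull those pieces out of the nloptr result (a small sketch, assuming res from the call above):
alloc <- res$solution   # optimal budget split
sum(alloc)              # ~5000, the budget constraint is (nearly) binding
-res$objective          # ~86.64, the maximized Gesamt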

Related

Combinatorial optimization with discrete options in R

I have a function of five variables that I want to maximize using only a specific set of allowed values for each variable.
Are there any methods in R that can do this, other than brute force (e.g. particle swarm optimization, genetic algorithms, greedy search, etc.)? I have looked at a few packages, but they seem to generate their own parameter values from within a given range. I am only interested in optimizing over the set of options provided.
Here is a simplified version of the problem:
#Example of 5 variable function to optimize
Fn <- function(x){
  a <- x[1]
  b <- x[2]
  c <- x[3]
  d <- x[4]
  e <- x[5]
  SUM <- a + b + c + d + e
  return(SUM)
}
#Parameters for variables to optimize
Vars = list(
  As = c(seq(1.5, 3, by = 0.3)),        # Float
  Bs = c(1, 2),                         # Binary
  Cs = c(seq(1, 60, by = 10)),          # Integer
  Ds = c(seq(60, -60, length.out = 5)), # Negative
  Es = c(1, 2, 3)
)
#Full combination
FullCombn= expand.grid(Vars)
Results=data.frame(I=as.numeric(), Sum=as.numeric())
for (i in 1:nrow(FullCombn)){
  ParsI <- FullCombn[i, ]
  ResultI <- Fn(ParsI)
  Results <- rbind(Results, c(I = i, Sum = ResultI))
}
#Best iteration (Largest result)
Best=Results[Results[, 2] == max(Results[, 2]),]
#Best parameters
FullCombn[Best$I,]
Two more possibilities. Both minimize by default, so I flip the sign in your objective function (i.e. return -SUM).
#Example of 5 variable function to optimize
Fn <- function(x, ...){
  a <- x[1]
  b <- x[2]
  c <- x[3]
  d <- x[4]
  e <- x[5]
  SUM <- a + b + c + d + e
  return(-SUM)
}
#Parameters for variables to optimize
Vars = list(
  As = c(seq(1.5, 3, by = 0.3)),        # Float
  Bs = c(1, 2),                         # Binary
  Cs = c(seq(1, 60, by = 10)),          # Integer
  Ds = c(seq(60, -60, length.out = 5)), # Negative
  Es = c(1, 2, 3)
)
First, a grid search. This is exactly what you did, just more convenient, and the implementation lets you distribute the evaluations of the objective function.
library("NMOF")
gridSearch(fun = Fn,
           levels = Vars)[c("minfun", "minlevels")]
## 5 variables with 6, 2, 6, 5, ... levels: 1080 function evaluations required.
## $minfun
## [1] -119
##
## $minlevels
## [1] 3 2 51 60 3
An alternative: a simple Local Search. You start with a valid initial guess, and then move randomly through possible feasible solutions. The key ingredient is the neighbourhood function. It picks one element randomly and then, again randomly, sets this element to one allowed value.
nb <- function(x, levels, ...) {
  i <- sample(length(levels), 1)
  x[i] <- sample(levels[[i]], 1)
  x
}
(There would be better algorithms for neighbourhood functions; but this one is simple and so demonstrates the idea well.)
LSopt(Fn, list(x0 = c(1.8, 2, 11, 30, 2), ## a feasible initial solution
               neighbour = nb,
               nI = 200                   ## iterations
               ),
      levels = Vars)$xbest
## Local Search.
## ##...
## Best solution overall: -119
## [1] 3 2 51 60 3
(Disclosure: I am the maintainer of package NMOF, which provides functions gridSearch and LSopt.)
In response to the comment, a few remarks on Local Search and the neighbourhood function above (nb). Local Search, as implemented in LSopt, starts with an arbitrary solution and then changes that solution slightly. The new solution, called a neighbour, is compared (by its objective-function value) to the old solution. If the new solution is better, it becomes the current solution; otherwise it is rejected and the old solution remains the current one. Then the algorithm repeats, for a number of iterations. So, in short, Local Search is not random sampling, but a guided random walk through the search space. It is guided because only better solutions get accepted; worse ones get rejected. In this sense, LSopt will narrow down on good parameter values.
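For intuition, here is a minimal sketch of that accept/reject loop (illustrative only, not the actual LSopt source; local_search is a made-up name):
local_search <- function(OF, x0, neighbour, nI, ...) {
  xc  <- x0
  xcF <- OF(xc, ...)
  for (i in seq_len(nI)) {
    xn  <- neighbour(xc, ...)   # propose a random neighbour
    xnF <- OF(xn, ...)
    if (xnF <= xcF) {           # accept only if not worse (minimisation)
      xc  <- xn
      xcF <- xnF
    }
  }
  list(xbest = xc, OFvalue = xcF)
}
## e.g. local_search(Fn, c(1.8, 2, 11, 30, 2), nb, 200, levels = Vars)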
The implementation of the neighbourhood function is not ideal, for two reasons. The first is that a solution may not change at all, since I sample from all feasible values: with a small set of possible values, as here, it can easily happen that the current value is drawn again. For larger search spaces, however, this inefficiency is typically negligible, since the probability of sampling the same value becomes small; often so small that the additional code for testing whether the solution has changed becomes more expensive than the occasionally wasted iteration.
A second thing could be improved, albeit with a more complicated function, and again, for this small problem it does not matter. In the current neighbourhood, an element is picked and then set to any feasible value, which means that changes from one solution to the next might be large. Instead of picking any feasible value of, say, the As, in realistic problems it will often be better to pick a value close to the current value: for example, when you are at 2.1, move to 1.8 or 2.4, but not to 3.0. (This reasoning is only relevant, of course, if the variable in question is on a numeric or at least ordinal scale.)
Ultimately, which implementation works well can only be tested empirically. Many more details are in this tutorial.
Here is one alternative implementation. A solution is now a vector of positions into the original values: if x[1] is 2, it "points" to 1.8 (the second element of As); if x[3] is 2, it points to 11 (the second element of Cs), and so on.
## precompute lengths of vectors in Vars
lens <- lengths(Vars)
nb2 <- function(x, lens, ...) {
  i <- sample(length(lens), 1)
  if (x[i] == 1L) {
    x[i] <- 2
  } else if (x[i] == lens[i]) {
    x[i] <- lens[i] - 1
  } else {
    x[i] <- x[i] + sample(c(1, -1), 1)
  }
  x
}
## the objective function now needs to map the
## indices in x back to the levels in Vars
Fn2 <- function(x, levels, ...){
  y <- mapply(`[`, levels, x)
  ## => same as
  ## y <- numeric(length(x))
  ## y[1] <- Vars[[1]][x[1]]
  ## y[2] <- Vars[[2]][x[2]]
  ## ....
  SUM <- sum(y)
  return(-SUM)
}
xbest <- LSopt(Fn2,
               list(x0 = c(1, 1, 1, 1, 1), ## an initial solution
                    neighbour = nb2,
                    nI = 200               ## iterations
                    ),
               levels = Vars,
               lens = lens)$xbest
## Local Search.
## ....
## Best solution overall: -119
## map the solution back to the values
mapply(`[`, Vars, xbest)
## As Bs Cs Ds Es
## 3 2 51 60 3
Here is a genetic-algorithm solution with package GA.
The key is to write a decode function that enforces the constraints; see the package vignette.
library(GA)
#> Loading required package: foreach
#> Loading required package: iterators
#> Package 'GA' version 3.2.2
#> Type 'citation("GA")' for citing this R package in publications.
#>
#> Attaching package: 'GA'
#> The following object is masked from 'package:utils':
#>
#> de
decode <- function(x) {
  As <- Vars$As
  Bs <- Vars$Bs
  Cs <- Vars$Cs
  Ds <- rev(Vars$Ds)
  # fix real variable As
  i <- findInterval(x[1], As)
  if (x[1L] - As[i] < As[i + 1L] - x[1L])
    x[1L] <- As[i]
  else x[1L] <- As[i + 1L]
  # fix binary variable Bs
  if (x[2L] - Bs[1L] < Bs[2L] - x[2L])
    x[2L] <- Bs[1L]
  else x[2L] <- Bs[2L]
  # fix integer variable Cs
  i <- findInterval(x[3L], Cs)
  if (x[3L] - Cs[i] < Cs[i + 1L] - x[3L])
    x[3L] <- Cs[i]
  else x[3L] <- Cs[i + 1L]
  # fix integer variable Ds
  i <- findInterval(x[4L], Ds)
  if (x[4L] - Ds[i] < Ds[i + 1L] - x[4L])
    x[4L] <- Ds[i]
  else x[4L] <- Ds[i + 1L]
  # fix the other, integer variable Es
  x[5L] <- round(x[5L])
  setNames(x, c("As", "Bs", "Cs", "Ds", "Es"))
}
Fn <- function(x){
  x <- decode(x)
  # a <- x[1]
  # b <- x[2]
  # c <- x[3]
  # d <- x[4]
  # e <- x[5]
  # SUM <- a + b + c + d + e
  SUM <- sum(x, na.rm = TRUE)
  return(SUM)
}
#Parameters for variables to optimize
Vars <- list(
As = seq(1.5, 3, by = 0.3), # Float
Bs = c(1, 2), # Binary
Cs = seq(1, 60, by = 10), # Integer
Ds = seq(60, -60, length.out = 5), # Negative
Es = c(1, 2, 3)
)
res <- ga(type = "real-valued",
          fitness = Fn,
          lower = c(1.5, 1, 1, -60, 1),
          upper = c(3, 2, 51, 60, 3),
          popSize = 1000,
          seed = 123)
summary(res)
#> ── Genetic Algorithm ───────────────────
#>
#> GA settings:
#> Type = real-valued
#> Population size = 1000
#> Number of generations = 100
#> Elitism = 50
#> Crossover probability = 0.8
#> Mutation probability = 0.1
#> Search domain =
#> x1 x2 x3 x4 x5
#> lower 1.5 1 1 -60 1
#> upper 3.0 2 51 60 3
#>
#> GA results:
#> Iterations = 100
#> Fitness function value = 119
#> Solutions =
#> x1 x2 x3 x4 x5
#> [1,] 2.854089 1.556080 46.11389 49.31045 2.532682
#> [2,] 2.869408 1.638266 46.12966 48.71106 2.559620
#> [3,] 2.865254 1.665405 46.21684 49.04667 2.528606
#> [4,] 2.866494 1.630416 46.12736 48.78017 2.530454
#> [5,] 2.860940 1.650015 46.31773 48.92642 2.521276
#> [6,] 2.851644 1.660358 46.09504 48.81425 2.525504
#> [7,] 2.855078 1.611837 46.13855 48.62022 2.575492
#> [8,] 2.857066 1.588893 46.15918 48.60505 2.588992
#> [9,] 2.862644 1.637806 46.20663 48.92781 2.579260
#> [10,] 2.861573 1.630762 46.23494 48.90927 2.555612
#> ...
#> [59,] 2.853788 1.640810 46.35649 48.87381 2.536682
#> [60,] 2.859090 1.658127 46.15508 48.85404 2.590679
apply(res@solution, 1, decode) |> t() |> unique()
#> As Bs Cs Ds Es
#> [1,] 3 2 51 60 3
Created on 2022-10-24 with reprex v2.0.2

Multiple integration with functions of variables in the limits

I need to numerically integrate the following:
I tried to use cubature and pracma, but they don't seem to support functional integration limits. I found an attempt to use cubature:
library(cubature)
integrand <- function(arg) {
  x <- arg[1]
  y <- arg[2]
  z <- arg[3]
  w <- arg[4]
  v <- arg[5]
  ff <- dnorm(x, 10, 2) * dnorm(y, 10, 2) * dnorm(z, 10, 2) *
    dnorm(w, 10, 2) * dnorm(v, 10, 2) * (x + y + z + w + v <= 52)
  return(ff)
}
R <- cuhre(f = integrand,
           lowerLimit = c(0, 0, 0, 0, 0),
           upperLimit = c(20, 20, 20, 20, 20),
           relTol = 1e-5, absTol = 1e-5)
But the author doesn't guarantee that this approach is correct.
Is there a way to numerically integrate multiple integrals with functional limits in R?
The domain of integration is the canonical simplex scaled by the factor 42. To evaluate an integral on a simplex, use the SimplicialCubature package:
integrand <- function(arg) {
  x <- arg[1]
  y <- arg[2]
  z <- arg[3]
  w <- arg[4]
  v <- arg[5]
  dnorm(x, 10, 2) *
    dnorm(y, 10, 2) *
    dnorm(z, 10, 2) *
    dnorm(w, 10, 2) *
    dnorm(v, 10, 2)
}
library(SimplicialCubature)
Simplex <- 42 * CanonicalSimplex(5)
Here is the command to run:
adaptIntegrateSimplex(integrand, S = Simplex)
# $integral
# [1] 0.03252553
#
# $estAbsError
# [1] 0.3248119
#
# $functionEvaluations
# [1] 9792
#
# $returnCode
# [1] 1
#
# $message
# [1] "error: maxEvals exceeded - too many function evaluations"
The algorithm reached the maximum number of function evaluations; the estimated value of the integral is 0.03252553, but the estimated absolute error is 0.3248119, which is large relative to the estimate.
We can increase the maximum number of function evaluations allowed. Taking 1e6, the computation is a bit slow and we get:
adaptIntegrateSimplex(integrand, S = Simplex, maxEvals = 1e6)
# $integral
# [1] 0.03682535
#
# $estAbsError
# [1] 0.001004083
#
# $functionEvaluations
# [1] 999811
#
# $returnCode
# [1] 1
#
# $message
# [1] "error: maxEvals exceeded - too many function evaluations"
The estimated error has decreased to 0.001004083, which is much better.
Note that we can approximate this integral by using simulations, because this integral is the measure of the simplex under a multivariate normal distribution:
library(mvtnorm)
Sigma <- 2^2 * diag(5)
Mean <- rep(10, 5)
set.seed(666)
sims <- rmvnorm(1e6, mean = Mean, sigma = Sigma)
f <- function(X){ # test whether 0 < x < 42, 0 < x + y < 42, 0 < x + y + z < 42, ...
  all(X > 0 & cumsum(X) < 42)
}
mean(apply(sims, 1, f))
# 0.037083
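As a rough check on the simulation (assuming sims and f from above), the Monte-Carlo standard error is about 2e-4, so the simulated value and the refined cubature estimate agree to roughly three decimal places:
p <- mean(apply(sims, 1, f))
sqrt(p * (1 - p) / nrow(sims))
# ~ 0.0002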

Expected return and covariance from return time series

I'm trying to replicate the Matlab ewstats function defined here:
https://it.mathworks.com/help/finance/ewstats.html
The results given by Matlab are the following:
> ExpReturn = 1×2
0.1995 0.1002
> ExpCovariance = 2×2
0.0032 -0.0017
-0.0017 0.0010
I’m trying to replicate the example with the RiskPortfolios R package:
https://cran.r-project.org/web/packages/RiskPortfolios/RiskPortfolios.pdf
The R code I'm using is:
library(RiskPortfolios)
rets <- as.matrix(cbind(c(0.24, 0.15, 0.27, 0.14), c(0.08, 0.13, 0.06, 0.13)))
w <- 0.98
rets
w
meanEstimation(rets, control = list(type = 'ewma', lambda = w))
covEstimation(rets, control = list(type = 'ewma', lambda = w))
The mean estimate is the same as the one in the example, but the covariance matrix is different:
> rets
[,1] [,2]
[1,] 0.24 0.08
[2,] 0.15 0.13
[3,] 0.27 0.06
[4,] 0.14 0.13
> w
[1] 0.98
>
> meanEstimation(rets, control = list(type = 'ewma', lambda = w))
[1] 0.1995434 0.1002031
>
> covEstimation(rets, control = list(type = 'ewma', lambda = w))
[,1] [,2]
[1,] 0.007045044 -0.003857217
[2,] -0.003857217 0.002123827
Am I missing something?
Thanks
They give the same answer if type = "lw" is used:
round(covEstimation(rets, control = list(type = 'lw')), 4)
## 0.0032 -0.0017
## -0.0017 0.0010
They use different algorithms. From the RiskPortfolios manual:
ewma ... See RiskMetrics (1996)
From the Matlab help page:
There is no relationship between ewstats function and the RiskMetrics® approach for determining the expected return and covariance from a return time series.
Unfortunately Matlab does not tell us which algorithm is used.
For those who need an equivalent ewstats function in R, here is the code I wrote:
ewstats <- function(RetSeries, DecayFactor=NULL, WindowLength=NULL){
#EWSTATS Expected return and covariance from return time series.
# Optional exponential weighting emphasizes more recent data.
#
# [ExpReturn, ExpCovariance, NumEffObs] = ewstats(RetSeries, ...
# DecayFactor, WindowLength)
#
# Inputs:
# RetSeries : NUMOBS by NASSETS matrix of equally spaced incremental
# return observations. The first row is the oldest observation, and the
# last row is the most recent.
#
# DecayFactor : Controls how much less each observation is weighted than its
# successor. The k'th observation back in time has weight DecayFactor^k.
# DecayFactor must lie in the range: 0 < DecayFactor <= 1.
# The default is DecayFactor = 1, which is the equally weighted linear
# moving average Model (BIS).
#
# WindowLength: The number of recent observations used in
# the computation. The default is all NUMOBS observations.
#
# Outputs:
# ExpReturn : 1 by NASSETS estimated expected returns.
#
# ExpCovariance : NASSETS by NASSETS estimated covariance matrix.
#
# NumEffObs: The number of effective observations is given by the formula:
# NumEffObs = (1-DecayFactor^WindowLength)/(1-DecayFactor). Smaller
# DecayFactors or WindowLengths emphasize recent data more strongly, but
# use less of the available data set.
#
# The standard deviations of the asset return processes are given by:
# STDVec = sqrt(diag(ECov)). The correlation matrix is :
# CorrMat = VarMat./( STDVec*STDVec' )
#
# See also MEAN, COV, COV2CORR.
  NumObs <- dim(RetSeries)[1]
  NumSeries <- dim(RetSeries)[2]

  # size the series and the window
  if (is.null(WindowLength)) {
    WindowLength <- NumObs
  }
  if (is.null(DecayFactor)) {
    DecayFactor <- 1
  }
  if (DecayFactor <= 0 | DecayFactor > 1) {
    stop('Must have 0 < decay factor <= 1.')
  }
  if (WindowLength > NumObs){
    stop(sprintf('Window Length %d must be <= number of observations %d',
                 WindowLength, NumObs))
  }

  # ------------------------------------------------------------------------
  # size the data to the window
  RetSeries <- RetSeries[(NumObs - WindowLength + 1):NumObs, , drop = FALSE]

  # Calculate decay coefficients
  DecayPowers <- seq(WindowLength - 1, 0, by = -1)
  VarWts <- sqrt(DecayFactor)^DecayPowers
  RetWts <- (DecayFactor)^DecayPowers

  NEff <- sum(RetWts) # number of equivalent values in computation

  # Compute the exponentially weighted mean return
  WtSeries <- matrix(rep(RetWts, times = NumSeries),
                     nrow = length(RetWts), ncol = NumSeries) * RetSeries
  ERet <- colSums(WtSeries) / NEff

  # Subtract the weighted mean from the original series
  CenteredSeries <- RetSeries - matrix(rep(ERet, each = WindowLength),
                                       nrow = WindowLength, ncol = length(ERet))

  # Compute the weighted variance
  WtSeries <- matrix(rep(VarWts, times = NumSeries),
                     nrow = length(VarWts), ncol = NumSeries) * CenteredSeries
  ECov <- t(WtSeries) %*% WtSeries / NEff

  list(ExpReturn = ERet, ExpCovariance = ECov, NumEffObs = NEff)
}
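As a quick check (assuming the function above has been sourced), running it on the return series from the question with a decay factor of 0.98 should reproduce the Matlab output up to rounding:
rets <- cbind(c(0.24, 0.15, 0.27, 0.14), c(0.08, 0.13, 0.06, 0.13))
ewstats(rets, DecayFactor = 0.98)
# $ExpReturn is c(0.1995, 0.1002); $ExpCovariance rounds to
# the Matlab matrix (0.0032, -0.0017; -0.0017, 0.0010)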

Constraint Optimization with one parameter included in the constraint of the other

I want to calculate the following:
So I want to find Theta and Sigma that maximize the function.
The constraints are:
Theta > -Sigma
-1 < Sigma < 1
One of my problems is that I don't know how to deal with the fact that one parameter appears in the constraint of the other parameter I want to optimize over.
I tried optim(), constrOptim() and dfoptim.
Using optim():
k=8
i=1:(k-1)
x=c(5,0.2)
n=24
nj=c(3,4,8,1,1,4,2,1)
EPPF <- function(x, n, nj) {
  y <- (x[1] + 1):(x[1] + 1 + (n - 1) - 1)
  z <- -(prod(x[1] + i * x[2]) / (prod(y)) * prod(sapply(nj, hfun)))
  return(z)
}
hfun <- function(p){
  h <- (1 - x[2]):((1 - x[2]) + p - 1)
  hfun <- prod(h)
  return(hfun)
}
> optim(c(6,0.3), fn=EPPF,method = "L-BFGS-B", n=n,nj=nj, lower = c(-x[1],-1), upper = c(Inf,1))
$par
[1] 6.0 0.3
$value
[1] -1.258458e-15
$counts
function gradient
2 2
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
I also tried using a constructor function:
make.EPPF <- function(n, nj, fixed = c(FALSE, FALSE)){
  params <- fixed
  function(p) {
    hfun <- function(y){
      h <- (1 - sigma):((1 - sigma) + y - 1)
      hfun <- prod(h)
      return(hfun)
    }
    params[!fixed] <- p
    theta <- params[1]
    sigma <- params[2]
    y <- (theta + 1):(theta + 1 + (n - 1) - 1)
    z <- (prod(theta + i * sigma) / (prod(y)) * prod(sapply(nj, hfun)))
    z
  }
}
EPPF <-make.EPPF (n,nj)
> optim(c(theta=6, sigma=0.5), fn=EPPF,method = "L-BFGS-B",lower = c(-sigma,-1), upper = c(Inf,1))
Error in optim(c(theta = 6, sigma = 0.5), fn = EPPF, method = "L-BFGS-B", :
object 'sigma' not found
Using constrOptim():
> A <- matrix(c(1,1,0,1,0,-1),3,2,byrow=T)
> b <- c(0,-1,-1)
>
> constrOptim(c(3,0.3),EPPF,NULL,A,b, control=list(fnscale=-1))
$par
[1] 3.0 0.3
$value
[1] 9.712117e-16
$counts
[1] 0
$convergence
[1] 0
$message
NULL
$outer.iterations
[1] 1
$barrier.value
[1] 7.313452e-05
Using Package dfoptim:
> library(dfoptim)
> nmkb(x=c(6,0.3), EPPF, lower=c(-x[2],-1), upper=c(Inf, 1 ))
Error in par < lower :
comparison (3) is possible only for atomic and list types
Either there is, for some reason, no convergence, or there are other errors.
I am relatively new to programming and R and would really appreciate it if someone could help me. Thanks!
These are 3 linear inequality constraints:
sigma + theta > 0
sigma + 1 > 0
-sigma + 1 > 0
You can do this in maxLik. But note that maxLik maximizes the function, hence remove the '-' in front of z. Here is the code that works for me (using Rscript):
k=8
i=1:(k-1)
x=c(5,0.2)
n=24
nj=c(3,4,8,1,1,4,2,1)
EPPF <- function(x, n, nj) {
  theta <- x[1]
  sigma <- x[2]
  y <- (x[1] + 1):(x[1] + 1 + (n - 1) - 1)
  z <- prod(x[1] + i * x[2]) / (prod(y)) * prod(sapply(nj, hfun))
  z <- log(z)
  return(z)
}
hfun <- function(p){
  h <- (1 - x[2]):((1 - x[2]) + p - 1)
  hfun <- prod(h)
  return(hfun)
}
library(maxLik)
## inequality constraints are encoded as ineqA %*% c(theta, sigma) + ineqB >= 0
constraints <- list(ineqA = matrix(c(1, 0, 0, 1, 1, -1), 3, 2), ineqB = c(0, 1, 1))
m <- maxBFGS(EPPF, start=c(6,0.3), constraints=constraints, n=n, nj=nj)
print(summary(m))
I also took the logarithm of the result, as this leads to more "human" numbers; otherwise you have to re-tune the stopping conditions. The answer seems to be -1, 1.
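If you need the fitted values programmatically (a small sketch, assuming the fit m from above), they are stored in the returned object:
m$estimate  # estimated c(theta, sigma), roughly c(-1, 1)
m$maximum   # maximized log objective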

How to write lp object to lp file?

I have been using lpSolve and lpSolveAPI. I build my constraint matrix, objective function, etc., feed them to the lp function, and this works just fine. I want to save the problem as an LP file using write.lp but am having trouble. I keep getting an error telling me that the object is not an lp object. Any ideas?
> x1 = lp(direction = "min", cost, A , ">=",r,,3:13, , , ,FALSE)
> class(x1)
[1] "lp"
>write.lp(x1, filename, type = "lp",use.names = c(TRUE, TRUE))
Error in write.lp(x1, filename, type = "lp", use.names = c(TRUE, TRUE)) :
the lp argument does not appear to be a valid linear program record
I don't think you can mix these two packages (lpSolveAPI doesn't import or depend on lpSolve). Consider a simple LP in lpSolve:
library(lpSolve)
costs <- c(1, 2)
mat <- diag(2)
dirs <- rep(">=", 2)
rhs <- c(1, 1)
x1 = lp("min", costs, mat, dirs, rhs)
x1
# Success: the objective function is 3
Based on the project website for lpSolveAPI, you do the same thing with something like:
library(lpSolveAPI)
x2 = make.lp(0, ncol(mat))
set.objfn(x2, costs)
for (idx in 1:nrow(mat)) {
  add.constraint(x2, mat[idx, ], dirs[idx], rhs[idx])
}
Now, we can solve and observe the solution:
x2
# Model name:
# C1 C2
# Minimize 1 2
# R1 1 0 >= 1
# R2 0 1 >= 1
# Kind Std Std
# Type Real Real
# Upper Inf Inf
# Lower 0 0
solve(x2)
# [1] 0
get.objective(x2)
# [1] 3
get.variables(x2)
# [1] 1 1
Getting back to the question, we can now write it out to a file:
write.lp(x2, "myfile.lp")
Here's the contents of the file:
/* Objective function */
min: +C1 +2 C2;
/* Constraints */
R1: +C1 >= 1;
R2: +C2 >= 1;
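To verify the written file, it can be read back into a fresh lpSolveAPI model (a small sketch using read.lp from lpSolveAPI):
x3 <- read.lp("myfile.lp", type = "lp")
solve(x3)
get.objective(x3)
# 3, same objective as before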
