Constrained Optimisation Problems in R

I am trying to set up an optimisation script that will take a set of models, fit curves to each of them, and then optimise across them subject to a few constraints.
Essentially, I have revenue as a diminishing function of cost, and I have this for multiple portfolios, say 4 or 5. As input, I have cost and revenue figures at set increments. What I want to do is fit a curve of the form Revenue = A * Cost^B to each portfolio, and then optimise across the portfolios to find the optimal cost split between them for a set budget.
The code below (I apologise for its inelegance; I'm sure there are MANY improvements to be made!) reads in my data (in this case a simulation), creates the necessary data frames (this is likely where the inelegance comes in), calculates the curve parameters for each simulation, and produces graphics to check the fitted curve against the data.
My problem is that now I have 5 curves of the form:
revenue = A * Cost ^ B (different A, B and cost for each function)
And I want to know, given the 5 fitted curves, how I should split my cost between them; that is, I want to maximise the sum of the 5 curves subject to
Cost[1] + Cost[2] + Cost[3] + Cost[4] + Cost[5] <= Budget
I know that I need to use constrOptim, but I have spent literally hours banging my head against my desk (literally hours, not literally banging my head...) and I still can't figure out how to set up the function so that it maximises revenue, subject to the cost constraint...
Any help here would be greatly appreciated, this has been bugging me for weeks.
Thanks!
Rich
## clear all previous data
rm(list = ls())
## detach()  # only needed if a data set was attached earlier

## read in data
sim  <- read.table("input19072011.txt", header = TRUE)
sim2 <- data.frame(sim$Wrevenue, sim$Cost)

## identify how many simulations there are - here you can change the 20 to the
## number of steps, but all simulations must have the same number of steps
portfolios <- length(sim2$sim.Cost) / 20

## create a data frame to hold the fitted coefficients
## (avoid calling it `matrix`, which would mask the base function)
a <- rep(1, portfolios)
b <- rep(2, portfolios)
coefs <- data.frame(a, b)

## row indices of the first block of 20 observations
k <- 1
j <- 20
for (i in 1:portfolios) {
  test  <- sim2[k:j, ]
  rev9  <- test[, 1]
  cost9 <- test[, 2]
  ds    <- data.frame(rev9, cost9)
  rhs <- function(cost, b0, b1) {
    b0 * cost^b1
  }
  m <- nls(rev9 ~ rhs(cost9, intercept, power), data = ds,
           start = list(intercept = 5, power = 1))
  coefs[i, 1] <- summary(m)$coefficients[1]
  coefs[i, 2] <- summary(m)$coefficients[2]
  k <- k + 20
  j <- j + 20
}
## now there exists a data frame of all of the coefficients for the curves to optimise
coefs
multiples  <- coefs[, 1]
powers     <- coefs[, 2]
coststarts <- rep(0, portfolios)
## check accuracy of curves
k <- 1
j <- 20
for (i in 1:portfolios) {
  dev.new()
  plot(sim$Wrevenue[k:j])
  lines(multiples[i] * (sim$Cost[k:j]^powers[i]))
  k <- k + 20
  j <- j + 20
}

If you want to find the values cost[1], ..., cost[5] that maximize revenue[1] + ... + revenue[5] subject to the constraints cost[1] + ... + cost[5] <= budget (and 0 <= cost[i] <= budget), you can parametrize the set of feasible solutions as follows:
cost[1] = s(x[1]) * budget
cost[2] = s(x[2]) * ( budget - cost[1] )
cost[3] = s(x[3]) * ( budget - cost[1] - cost[2] )
cost[4] = s(x[4]) * ( budget - cost[1] - cost[2] - cost[3] )
cost[5] = budget - cost[1] - cost[2] - cost[3] - cost[4]
where x[1], ..., x[4] are the parameters to find (with no constraints on them) and s is any bijection between the real line R and the interval (0,1).
# Sample data
a <- rlnorm(5)
b <- rlnorm(5)
budget <- rlnorm(1)

# Reparametrization: s is a bijection from R onto (0,1)
s <- function(x) exp(x) / (1 + exp(x))
cost <- function(x) {
  cost <- rep(NA, 5)
  cost[1] <- s(x[1]) * budget
  cost[2] <- s(x[2]) * (budget - cost[1])
  cost[3] <- s(x[3]) * (budget - cost[1] - cost[2])
  cost[4] <- s(x[4]) * (budget - cost[1] - cost[2] - cost[3])
  cost[5] <- budget - cost[1] - cost[2] - cost[3] - cost[4]
  cost
}

# Function to maximize
f <- function(x) {
  result <- sum(a * cost(x)^b)
  cat(result, "\n")
  result
}

# Optimization (fnscale = -1 makes optim maximize instead of minimize)
r <- optim(c(0, 0, 0, 0), f, control = list(fnscale = -1))
cost(r$par)
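As a quick sanity check (my addition, assuming the code above has just been run): the recovered split is feasible by construction, and this parametrization always spends the full budget, which is optimal here because each revenue curve is increasing in cost.
# Sanity check: costs are non-negative and exhaust the budget
opt <- cost(r$par)
stopifnot(all(opt >= 0))
all.equal(sum(opt), budget)  # TRUE: cost[5] is defined as the remainder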

Related

Please help me find where the error is (R language)

I'm writing an SI model using the deSolve package for R. The model describes the transmission of an infection within a community and then allows the introduction of external events, which represent mass treatment of the whole community. The final graph should show the three curves "Susceptible", "Infected", and "Total", but the output chart is wrong and I don't know where the problem is. The code runs in the debugger without any error popping up, yet there are no "Infected" or "Total" lines in the graph, and the "Susceptible" curve is wrong too. The first image shows what the code produces; the second image shows what I need.
Here is the code:
rm(list = ls())
require(deSolve)

odeequations <- function(t, y, pars) {
  S <- y[1]; In <- y[2]
  beta <- pars[1]
  dS  <- -beta * S * In
  dIn <-  beta * S * In
  return(list(c(dS, dIn)))
}

S0 <- 1
I0 <- 100000
Y0 <- c(S0, I0)
tmax <- 70
dt <- 1
timevec <- seq(0, tmax, dt)
beta <- 1.5
pars <- c(beta)

odeoutput <- lsoda(Y0, timevec, odeequations, parms = pars, atol = 1e-7, rtol = 1e-7)

## windows(width=10, height=10)
plot(odeoutput[, 1], odeoutput[, 2], type = "l", xlab = "time (years)", ylab = "",
     col = "green", lwd = 2, log = "", xlim = c(0, tmax),
     ylim = c(1, max(odeoutput[, 2])), main = "Outbreak Time Series")
lines(odeoutput[, 1], odeoutput[, 3], type = "l", col = "red", lwd = 2)
lines(odeoutput[, 1], odeoutput[, 2] + odeoutput[, 3], type = "l", col = "blue", lwd = 2)
legend("right", c("Susceptible", "Infected", "Total"),
       col = c("green", "red", "blue"), lwd = 2)
A general SIR model with deSolve can be found in several places, for example in the following paper: https://doi.org/10.18637/jss.v033.i03
An SEIR model (i.e. one with an additional state variable) can be formulated as follows:
library("deSolve")
SEIR <- function(t, y, parms) {
with(as.list(c(parms, y)), {
dS <- -rho * beta * I * S
dE <- rho * beta * S * I - alpha * E
dI <- alpha * E - gamma * I
dR <- gamma * I
list(c(dS, dE, dI, dR))
})
}
# state variables: fractions of total population
y0 <- c(S=1 - 5e-4, # susceptible
E=4e-4, # exposed
I=1e-4, # infected
R=0) # recovered or deceased
parms <- c(alpha = 0.2, # inverse of incubation period (5 days)
beta = 1.75, # average contact rate
gamma = 0.5, # inverse of mean infectious period (2 days)
rho = 1) # social distancing factor (0 ... 1)
# time in days
times <- seq(0, 150, 1)
# numerical integration
out <- ode(y0, times, SEIR, parms, method="bdf", atol=1e-8, rtol=1e-8)
matplot.0D(out)
This example and several links to other versions can be found at https://github.com/tpetzoldt/covid
There are two errors in your code:
1. You have S0 and I0 switched (i.e., try setting S0 <- 1e5; I0 <- 1 rather than vice versa).
2. Your beta value is way too high; try beta <- 1.5/1e5 (i.e., scale it by the population size).
Applying both fixes gives something like the sketch below.
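A minimal corrected sketch (my adaptation of the code from the question, with only the two fixes above applied):
# Corrected SI model: S0/I0 swapped back and beta scaled by population size
library(deSolve)

odeequations <- function(t, y, pars) {
  S <- y[1]; In <- y[2]
  beta <- pars[1]
  dS  <- -beta * S * In
  dIn <-  beta * S * In
  list(c(dS, dIn))
}

S0 <- 1e5                # fix 1: the susceptibles start large ...
I0 <- 1                  # ... and the infecteds start small
beta <- 1.5 / 1e5        # fix 2: scale beta by the population size
timevec <- seq(0, 70, 1)

out <- lsoda(c(S0, I0), timevec, odeequations, parms = c(beta),
             atol = 1e-7, rtol = 1e-7)
matplot(out[, 1], cbind(out[, 2], out[, 3], out[, 2] + out[, 3]),
        type = "l", lty = 1, lwd = 2, col = c("green", "red", "blue"),
        xlab = "time", ylab = "individuals", main = "Outbreak Time Series")
legend("right", c("Susceptible", "Infected", "Total"),
       col = c("green", "red", "blue"), lwd = 2)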

R fit function returning only the starting values

I am automating my trading in R. I am trying to use the nls fit function to optimise my formula, but I only get back the initial starting parameters that I enter. Instead of using trial and error, I am trying to find a function that returns the optimal values for my strategy.
I have tried entering various values for the variables "a" and "b", but I only get back the starting values I enter and no optimisation takes place. I am not sure if I am using the wrong function or if there is a more appropriate one I should be using. The code below shows what I have tried; the variable values (the ones given by the model, not the ones I am trying to optimise) have been generated randomly, as I do not know how to upload the market data into this question.
# VARIABLES
x <- 1:1000                                                    # number instead of date
y <- round(runif(1000, min = 0, max = 50), digits = 2)         # highest price of the day minus the opening price
z <- round(runif(1000, min = 0.001, max = 0.040), digits = 6)  # implied volatility for the day
w <- sample(2000:2800, 1000, replace = TRUE)                   # opening price for the day

# FORMULA (pseudo-code): the open price of the day, multiplied by the implied
# volatility for the day, is approximately the highest price of the day minus
# the open price of the day:
#   (w * (1 + z)) - w = y

# OPTIMISED FORMULA FORMAT (pseudo-code), attempting to improve the accuracy
# of the result for the expected high (y):
#   ((w * ((1 + z) * a)) * b) - w = y

# TRYING WITH STARTING VALUES
a <- 0.000001
b <- 0.000001

# USING the nls fit function
m <- nls(y ~ ((w * ((1 + z) * a)) - w)) + b
# OR
m <- nls(y ~ ((w * ((1 + z) * a)) * b)) - w
I am trying to get the values of the variables "a" and "b" which best suit either version of my formula, so that the expected high better approximates the realised high. Thanks in advance for any help you might be able to offer.
It is difficult to understand the function you want to optimize. Try something like this:
m <- nls(y ~ w * (1 + z) * a - b * w, start = list(a = a, b = b))
m
> m
Nonlinear regression model
  model: y ~ w * (1 + z) * a - b * w
   data: parent.frame()
         a          b
 0.0089771 -0.0008416
 residual sum-of-squares: 221244

Number of iterations to convergence: 1
Achieved convergence tolerance: 1.944e-07
> coef(m)
           a            b
0.0089771178 -0.0008416359
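Once the model has converged, the coefficients can be turned into predictions of the expected high; a small usage sketch (my addition, reusing w, z and y from the question):
# Predicted expected high for each day from the fitted coefficients
a.hat <- coef(m)["a"]
b.hat <- coef(m)["b"]
expected.high <- w * (1 + z) * a.hat - b.hat * w
head(cbind(expected = expected.high, actual = y))
# nls also provides the same values directly:
head(fitted(m))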

How to solve a portfolio optimization with a generalised objective function?

I have a portfolio of 5 stocks for which I want to find an optimal mix of minimizing portfolio variance and maximizing expected future dividends; the latter come from analysts' forecasts. My problem is that I know how to solve a minimum-variance problem, but I am not sure how to put the quadratic form into the right matrix form for the objective function of quadprog.
The standard minimum-variance problem reads
Min! ( portfolio volatility )
where r has the 252 daily returns of the five stocks and d has the expected yearly dividend yields (firm_A pays 1 %, firm_B pays 2 %, etc.).
I have programmed it as follows:
dat <- rep(rnorm(10, mean = 0, sd = 1), 252 * 5)
r   <- matrix(dat, nr = 252, nc = 5)
d   <- matrix(c(1, 2, 1, 2, 2))

library(quadprog)

# Dmat (covariance) and dvec (penalized returns) are generated easily
risk.param <- 0.5
Dmat <- cov(r)
Dmat[is.na(Dmat)] <- 0
dvec <- matrix(colMeans(r) * risk.param)
dvec[is.na(dvec)] <- 1e-5

# The weights sum up to 1
n   <- 5
A   <- matrix(rep(1, n), nr = n)
b   <- 1
meq <- 1
res <- solve.QP(Dmat, dvec, A, b, meq = 1)
Obviously, the returns in r are standard normal, hence each stock gets about a 20 % weight.
Q1: How can I account for the fact that firm_A pays a dividend of 1, firm_B a dividend of 2, etc?
The new objective function reads:
Max! ( 0.5 * Portfolio_div - 0.5 * Portfolio_variance )
but I don't know how to hard-code it. The portfolio variance was easy to put into Dmat, but the new objective function has the Portfolio_div element, defined as Portfolio_div = w * d, where w holds the five weights.
Thanks a lot.
EDIT: Maybe it makes sense to add a higher-level description of the problem:
I am able to run a minimum-variance optimization with the code above. Minimizing the portfolio variance means optimizing the weights against the variance-covariance matrix Dmat (of dimension 5x5). However, I want to add a further term to the optimization: the dividends in d multiplied by the weights (hence of dimension 5x1). The same weights are also used for Dmat.
Q2: How can I add the vector d to the code?
EDIT2: I guess the answer is to simply use
dvec = -1/d
as I maximize expected dividends by minimizing the inverse of the negative.
Q3: Could someone please tell me if that's right?
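For reference (my addition, not from the original answers): solve.QP minimizes -dvec' w + 0.5 * w' Dmat w, so maximizing 0.5 * Portfolio_div - 0.5 * Portfolio_variance is equivalent to keeping Dmat = cov(r) and setting dvec = 0.5 * d, rather than -1/d. A sketch reusing the objects defined above:
# Combined objective: max 0.5 * w'd - 0.5 * w' Sigma w
# quadprog minimizes 0.5 * w' Dmat w - dvec' w, hence Dmat = Sigma, dvec = 0.5 * d
Dmat <- cov(r)
dvec <- 0.5 * as.vector(d)
A    <- matrix(rep(1, 5), nr = 5)  # weights sum to 1
res  <- solve.QP(Dmat, dvec, A, bvec = 1, meq = 1)
res$solution                       # optimal weights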
Opening a can of worms:
TLDR: While I respect the great work Harry MARKOWITZ (1990 Nobel prize) has performed, I appreciate his wonderful CACI Simulations spin-off deterministic simulation framework COMET III much more than the portfolio-theory assumption that variance per se is the ruling minimiser driver for the portfolio optimisation process.
Driving this principal point of view further (it may still suit the somewhat ill-formed motivation of big funds that live happily off their 2-and-20 fees, given the nature and scale of "their" skewed perception of what direct losses are, which they recognise as the hefty, risk-free management fees forgone through the AUM erosion that crowd-panic churn causes, rather than as the real profits and losses arising from their (in)ability to deliver any above-average AUM returns), and closer to your idea: the problem lies in the proper formulation of the { penalty | utility } function.
While variance is taken in classical efficient-frontier theory as a penalty factor, operated on in a min! global search, it has not much to do with real profit generation. You get penalised even for the positive-side variance components, which is nonsense per se.
On the contrary, the dividend is a direct benefit, an absolute utility, entering the max! optimisation process.
So the first step in Q3 & Q1 ought to be the design of a consistent utility function, isolated from relative, revenue-unrelated factors, but containing all the other absolute factors -- cost of entry, transaction costs, rebalancing costs -- as otherwise your utility model would mislead your portfolio wealth-management strategy.
A2: Without this a-priori designed property, no one may claim a model is worth a single CPU-hour of global optimisation effort.

Accuracy is different between "tune" and "predict" in R

I am making an SVM which will differentiate between before and after track maintenance, using an on-board accelerometer on a train car. There is a focused section of track, and I extracted the acceleration data corresponding to that section. Each run takes around 3 minutes to pass that section, so given that the sampling rate of the accelerometer is around 1600/s, there are around 3 min * 60 s * 1600/s = 288,000 acceleration records for each run. I then calculate the variance, maximum, minimum, mean, standard deviation and most frequent value of those acceleration records for each run. There are around 250 runs, so I made a dataset of those calculated values for the 250 runs. I also added the classification of before or after track maintenance, based on the maintenance record and the date of each run.
Using this, I tried to train the SVM as mentioned. At first, I tried to find the optimal gamma and cost parameters of the Gaussian kernel function, so I used "tune" to do a grid search. I got the following result:
> source("grid_search.R")
[gamma = 1 , cost = 10 ]
- best parameters:
gamma = 1.584893 ; cost = 25.11886 ;
accuracy: 88.54935 %
Also "grid_search.R" is as follow:
gamma <- 10^(0.0)
cost <- 10^(1.0)
gammaRange <- 10^seq(log10(gamma)-1,log10(gamma)+1,length=11)[2:10]
costRange <- 10^seq(log10(cost)-1 ,log10(cost)+1 ,length=11)[2:10]
t <- tune.svm(Category ~ ., data = X, gamma=gammaRange, cost=costRange,
tunecontrol = tune.control(sampling="cross", cross=8))
cat("[gamma =", gamma, ", cost =" , cost , "]\n")
cat("- best parameters:\n")
cat("gamma =", t$best.parameters$gamma, "; cost =", t$best.parameters$cost, ";\n")
cat("accuracy:", 100 - t$best.performance * 100, "%\n\n")
plot(t, transform.x=log10, transform.y=log10, zlim=c(0,0.1))
After that, using "svm" with the parameters "gamma = 1.584893; cost = 25.11886" found above, I trained the SVM and tried to classify the same data that was used for training:
gamma <- 1.584893; cost <- 25.11886
model <- svm(Category ~ ., data = X, gamma = gamma, cost = cost)
pred  <- predict(model, X)
table(pred, X[,13])
And I got the following confusion matrix:
pred     after before
  after     47      2
  before     1    185
My question is: based on the matrix above, the accuracy is
1 - (1 + 2)/(47 + 2 + 1 + 185) = 0.987234 (98.7 %)
But I also got "accuracy: 88.54935 %" as the result of "tune" when I found the optimal parameters "gamma = 1.584893; cost = 25.11886". Why are these two accuracies so different?
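A note on the likely cause (my addition, not part of the original post): tune reports an 8-fold cross-validated accuracy, i.e. one measured on held-out folds, while the 98.7 % figure is training accuracy, obtained by predicting the very data the model was fitted on, and is therefore optimistically biased. A comparable cross-validated figure can be obtained directly from svm (assuming the e1071 package and the data frame X from the question):
# Cross-validated vs. training accuracy for the tuned parameters
model <- svm(Category ~ ., data = X, gamma = 1.584893, cost = 25.11886,
             cross = 8)                # built-in 8-fold cross-validation
model$tot.accuracy                     # CV accuracy, comparable to tune's ~88.5 %
mean(predict(model, X) == X$Category)  # training accuracy, ~0.987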

Errors when attempting constrained optimisation using optim()

I have been using the Excel solver to handle the following problem
solve for a, b and c in the equation:
y = a*b*c*x/((1 - c*x)*(1 - c*x + b*c*x))
subject to the constraints
0 < a < 100
0 < b < 100
0 < c < 100
f(x[1]) < 10
f(x[2]) > 20
f(x[3]) < 40
where I have about 10 (x, y) value pairs. I minimize the sum of abs(y - f(x)), and in Excel I can constrain both the coefficients and the range of values the function takes at each x.
I tried nls (without trying to impose the constraints), and while Excel provided estimates for almost any starting values I cared to provide, nls almost never returned an answer.
I switched to optim, but I'm having trouble applying the constraints.
This is how far I have gotten:
best <- function(p, x, y) {
  sum(abs(y - p[1]*p[2]*p[3]*x / ((1 - p[3]*x)*(1 - p[3]*x + p[2]*p[3]*x))))
}
p <- c(1, 1, 1)
x <- c(.1, .5, .9)
y <- c(5, 26, 35)
optim(p, best, x = x, y = y)
I did this to add the first set of constraints:
optim(p, best, x = x, y = y, method = "L-BFGS-B",
      lower = c(0, 0, 0), upper = c(100, 100, 100))
I get the error "ERROR: ABNORMAL_TERMINATION_IN_LNSRCH" and end up with a higher value of the objective ($value), so it seems like I am doing something wrong. I couldn't figure out how to apply my other set of constraints at all.
Could someone provide me a basic idea how to solve this problem that a non-statistician can understand? I looked at a lot of posts and looked in a few R books. The R books stopped at the simplest use of optim.
The absolute value introduces a singularity: you may want to use a square instead, especially for gradient-based methods (such as L-BFGS). The denominator of your function can be zero. The fact that the parameters appear in products, and that you allow them to be (arbitrarily close to) zero, can also cause problems.
You can try other optimizers (there is a complete list on the optimization task view) until you find one for which the optimization converges.
x0 <- c(.1, .5, .9)
y0 <- c(5, 26, 35)
p  <- c(1, 1, 1)
lower <- 0 * p
upper <- 100 + lower

f <- function(p, x = x0, y = y0) sum(
  (
    y - p[1]*p[2]*p[3]*x / ((1 - p[3]*x)*(1 - p[3]*x + p[2]*p[3]*x))
  )^2
)

library(dfoptim)
nmkb(p, f, lower = lower, upper = upper)    # Converges

library(Rvmmin)
Rvmmin(p, f, lower = lower, upper = upper)  # Does not converge

library(DEoptim)
DEoptim(f, lower, upper)                    # Does not converge

library(NMOF)
PSopt(f, list(min = lower, max = upper))[c("xbest", "OFvalue")]  # Does not really converge
DEopt(f, list(min = lower, max = upper))[c("xbest", "OFvalue")]  # Does not really converge

library(minqa)
bobyqa(p, f, lower, upper)                  # Does not really converge
As a last resort, you can always use a grid search.
library(NMOF)
r <- gridSearch(f,
                lapply(seq_along(p),
                       function(i) seq(lower[i], upper[i], length = 200)))
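To inspect the result (my addition; note that 200 points per dimension means 200^3 = 8,000,000 function evaluations, so a coarser grid may be preferable):
# Best objective value and parameter vector found on the grid
r$minfun     # smallest sum of squared residuals found
r$minlevels  # corresponding values of p[1], p[2], p[3]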
