How can I set up this equation for constrained maximization? - r

How can I write this equation inside R as a function?
maximize: 200 * x1^(2/3) * x2^(1/3)
subject to: 20*x1 + 170*x2 = 20000
#ATTEMPT
library(Rsolnp)
fn <- function(h, s){
  z = 200 * x[1]^(2/3) * x[2]^(1/3)
  return(-z)
}
# constraint z1: 20*x+170*y=20000
eqn <- function(x) {
  z1 = 20*x[1] + 170*x[2]
  return(c(z1))
}
constraints = c(20000)
x0 <- c(1, 1) # setup init values
sol1 <- solnp(x0, fun = fn, eqfun = eqn, eqB = constraints)
sol1$pars

In R, we would use the keyword function and pass it the necessary parameters, for example in this case:
R <- function(h, s)200 * h^(2/3) * s^(1/3)
We now have a function called R that takes in arguments h and s and gives us an output.
For example, we could do:
R(27, 8)
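To connect this back to the solnp attempt above: solnp optimizes over a single parameter vector, so both the objective and the constraint function should take that vector (here x, with x[1] playing the role of h and x[2] of s), and the objective returns the negative value because solnp minimizes. A minimal sketch of that setup (assuming the objective shown in the attempt; the lower bounds are added only to keep the fractional powers defined):

library(Rsolnp)

# objective: maximize 200 * x1^(2/3) * x2^(1/3), so minimize its negative
fn <- function(x) {
  -200 * x[1]^(2/3) * x[2]^(1/3)
}

# equality constraint: 20*x1 + 170*x2 must equal 20000
eqn <- function(x) {
  20 * x[1] + 170 * x[2]
}

x0 <- c(1, 1)   # initial values
sol1 <- solnp(x0, fun = fn, eqfun = eqn, eqB = 20000,
              LB = c(0, 0))   # keep both inputs non-negative
sol1$pars       # optimal x1, x2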

Related

I include the 'time' parameter in my differential equation solver and it messes it up somehow

library(deSolve)
require(deSolve)
delta_conc <- function(time, current_state, params) {
  with(as.list(c(current_state, params)), {
    dX <- Y
    dY <- X - X^3 - 0.25*Y + A * sin(t)
    return(list(c(dX, dY)))
  })
}
params <- c(
A <- 0.2645
)
initial_state <- c(
X <- 0.9,
Y <- 0.4
)
times <- 1:10
model <- ode(initial_state, times, delta_conc, params)
summary(model)
matplot(model, type="l",lty=1, main="Enzyme model", xlab="Time")
I get this error message when I try to run it:
Error in checkFunc(Func2, times, y, rho) :
The number of derivatives returned by func() (21) must equal the length of the initial conditions vector (2)
When I exclude the 'sin(t)' part it works, so the problem is with that part, but I'm very much a beginner so I have no idea how to approach this problem
You should consistently use either t or time for the actual time step. In your case t is not defined as a variable, so t is interpreted as the transpose function t().
The following should work:
require(deSolve)
delta_conc <- function(time, current_state, params) {
  with(as.list(c(current_state, params)), {
    dX <- Y
    dY <- X - X^3 - 0.25*Y + A * sin(time)
    return(list(c(dX, dY)))
  })
}
params <- c(
A = 0.2645
)
initial_state <- c(
X = 0.9,
Y = 0.4
)
times <- 1:10
model <- ode(initial_state, times, delta_conc, params)
summary(model)
matplot.0D(model, type="l",lty=1, main="Enzyme model", xlab="Time")
In addition, the code had some other issues:
use either require or library, not both
use = within c(); it matches names to values and is not an assignment
Two additional suggestions:
you can use the deSolve built-in plot function matplot.0D
I would recommend using times <- seq(0, 10, length.out = 100) instead of 1:10, so that the plot comes out smooth. Starting time at 1 (or another value) may be OK, but it is often more convenient to start time at zero. A short sketch combining these last two suggestions follows below.
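For example (reusing delta_conc, params and initial_state from the corrected code above):

library(deSolve)

times <- seq(0, 10, length.out = 100)  # finer grid, starting at zero
model <- ode(initial_state, times, delta_conc, params)

# deSolve's built-in plot method for 0-D models
matplot.0D(model, type = "l", lty = 1, main = "Enzyme model", xlab = "Time")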

deSolve ODE Integration Error, am I using the wrong function?

I'm attempting to solve a set of equations related to biological processes. One equation (of about 5) is for a pharmacokinetic (PK) curve of the form C = C0*(exp(k1*t) - exp(k2*t)). The need is to simultaneously solve the derivative of this equation along with some enzyme binding equations, and the initial results were not as expected. After troubleshooting, I realized that the PK derivative doesn't numerically integrate by itself, if k is negative, using the deSolve ode function. I've attempted every method (lsode, lsoda, etc.) in the ode function with no success. I've tried adjusting rtol; it doesn't resolve the issue.
Is there an alternative to the deSolve ode function I should investigate? Or another way to get at this problem?
Below is the code with a simplified equation to demonstrate the problem.
When k is negative, the integrated solution does not match the analytical result.
When k is positive, results are as expected.
First Image, result with k=0.2: Analytical and Integrated results match when k is positive
Second Image, result with k=-0.2: Integrated result does not match analytical when k is negative
library(deSolve)
abi <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dI <- k*exp(k*t)
    list(c(dI))
  })
}
k <- c(-0.2)
times <- seq(0, 24, by = 1)
I_analytical <- exp(k*times)
parameters <- c(k)
state <- c(I = 0)
out <- ode(y = state, times = times, func = abi, parms = parameters)
plot(out)
points(I_analytical ~ times)
It was pointed out that the initial condition easily resolves the above example, which is very helpful. Here is the equation I can't accurately integrate; I've tried a few different initial conditions without real success.
library(deSolve)
## Chaos in the atmosphere
CYP <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    #dE <- ksyn - (kdeg * E) + (k2 * EI) - (k1 * E * I)
    #dEI <- (k1 * E * I) - (k2 * EI) + (k4 * EIstar) - (k3 * EI)
    #dEIstar <- (k3 * EI) - (k4 * EIstar)
    #dOcc <- dEI + dEIstar
    dI <- a*tau1*exp(tau1*t) + b*tau2*exp(tau2*t) + c*tau3*exp(tau3*t)
    #list(c(dE, dEI, dEIstar, dOcc, dI))
    list(c(dI))
  })
}
ifit <- c(-0.956144311,0.82619445,0.024520276,-0.913499862,-0.407478829,-0.037174745)
a = ifit[1]
b = ifit[2]
c = ifit[3]
tau1 = ifit[4]
tau2 = ifit[5]
tau3 = ifit[6]
parameters <- c(ksyn = 0.82, kdeg = 0.02, k1 = 2808, k2 = 370.66, k3 = 2.12, k4 = 0.017, a, b, c, tau1, tau2, tau3)
#state <- c(E = 41, EI = 0, EIstar = 0, Occupancy = 0, I = 0.0)
state <- c(I=-0.01)
times <- seq(0, 24, by = .1)
out <- ode(y = state, times = times, func = CYP, parms = parameters)
I_analytical <- a*exp(tau1*times) + b*exp(tau2*times) + c*exp(tau3*times)
plot(out)
points(I_analytical ~ times)
Target curve and the ode solution line.
The initial value should be
state <- c(I= a + b + c)
#state <- c(I = 1)
The first script contains several issues. The two most important are that (1) the model function (abi) must contain the derivative, not an integrated function, and (2) the analytically integrated model misses the factor I_0 that comes from the integration constant.
Let's assume a first-order decay model
dI/dt = k * I
then analytical integration yields
I(t) = I_0 * exp(k * t)
The code is then:
library(deSolve)
abi <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    # dI <- k*exp(k*t) # original
    dI <- k * I # corrected, should be the derivative
    list(c(dI))
  })
}
k <- -0.2 # simplified, c() was not necessary
times <- seq(0, 24, by = 1)
# correction: set I0 to a value > zero
I0 <- 10
# I_analytical <- exp(k*times) # original
I_analytical <- I0 * exp(k*times) # corrected, multiplied with I0
#state <- c(I = 0) # original
state <- c(I = I0) # corrected
parameters <- c(k = k)
out <- ode(y = state, times = times, func = abi, parms = parameters)
plot(out)
points(I_analytical ~ times)
This code can be further simplified if you want.
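Applied to the second script, the model function already returns a true derivative, so only the initial condition needs to match the analytical curve at t = 0, i.e. I(0) = a + b + c, as stated above. A short check of this, reusing CYP, the ifit coefficients, parameters and times from the question:

state <- c(I = a + b + c)   # value of the analytical curve at t = 0
out <- ode(y = state, times = times, func = CYP, parms = parameters)

I_analytical <- a*exp(tau1*times) + b*exp(tau2*times) + c*exp(tau3*times)
plot(out)
points(I_analytical ~ times)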

fzero function from MATLAB to R

I want to use R's fzero function to find roots of a function. The problem gets complicated, as the function in question calls some other functions which in turn call other ones. I do have MATLAB code that does it and I am trying to translate it to R, but cannot make it work. My experience with MATLAB is limited, so it's probable I just missed some feature of the MATLAB code while translating. My ultimate goal is to obtain a working R equivalent of the MATLAB code. Any hints will be highly appreciated!
The error I got is in function psi():
Error in (-t(I) * pi^2) %*% time : non-conformable arguments
Although the sizes of the matrices do match, and this part of the code works with some naive input when run in isolation.
NB: I have tried using mrdivide (R's equivalent of MATLAB's right matrix division) in some places, but with no effect.
NB2: I obtain the same error trying function uniroot instead of fzero.
# Global parameters:
N = 140
A2 = a 256 x 256 matrix with data
I = a vector of size 256: (0, 1, 2^2, 3^2, 4^2, ..., 255^2)
# ----------------------------------------------------------------
MATLAB working code:
fzero( @(t)(t-evolve(t)),[0,0.1])
function [out,time]=evolve(t)
global N
Sum_func = func([0,2],t) + func([2,0],t) + 2*func([1,1],t);
time=(2*pi*N*Sum_func)^(-1/3);
out=(t-time)/time;
end
function out=func(s,t)
global N
if sum(s)<=4
Sum_func=func([s(1)+1,s(2)],t)+func([s(1),s(2)+1],t);
const=(1+1/2^(sum(s)+1))/3;
time=(-2*const*K(s(1))*K(s(2))/N/Sum_func)^(1/(2+sum(s)));
out=psi(s,time);
else
out=psi(s,t);
end
end
function out=psi(s,Time)
global I A2
% s is a vector
w=exp(-I*pi^2*Time).*[1,.5*ones(1,length(I)-1)];
wx=w.*(I.^s(1));
wy=w.*(I.^s(2));
out=(-1)^sum(s)*(wy*A2*wx')*pi^(2*sum(s));
end
function out=K(s)
out=(-1)^s*prod((1:2:2*s-1))/sqrt(2*pi);
end
# ----------------------------------------------------------------
My attempt at R translation (not working):
fzero(subtract_evolve, c(0, 0.1))
K <- function(s) {
  out <- (-1)^s * prod(seq(from = 1, to = 2*s-1, by = 2))/sqrt(2*pi)
  return(out)
}
psi <- function(s, time) {
  w <- (exp((-t(I) * pi^2) %*% time)) *
    t(c(cbind(1, 0.5*ones(1,length(I)-1))))
  wx <- t(w * (I^s[1]))
  wy <- t(w * (I^s[2]))
  out <- (-1)^sum(s) * (wy %*% A2 %*% t(wx)) * pi^(2*sum(s))
  return(out)
}
func <- function(s, t) {
  if (sum(s) <= 4) {
    sum_func <- func(c(s[1]+1,s[2]), t) + func(c(s[1],s[2]+1), t)
    const <- (1+1/2^(sum(s)+1))/3
    time <- (-2 * const * K(s[1]) * K(s[2]) / N / sum_func)^(1/(2+sum(s)))
    out <- psi(s, time)
  } else {
    out <- psi(s, t)
  }
  return(out)
}
evolve <- function(t) {
  sum_func = func(c(0,2), t) + func(c(2,0), t) + 2*func(c(1,1),t)
  time <- (2*pi*N*Sum_func)^(-1/3)
  out <- (t-time)/time
  return(c(out, time))
}
subtract_evolve <- function(t) {
  return(t - evolve(t))
}
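One observation on the reported error (a partial pointer, not a full translation): in the MATLAB version Time is a scalar, so -I*pi^2*Time is an elementwise product, whereas the R line uses the matrix product %*% between a 1 x 256 object and a length-one value, which is non-conformable. A sketch of psi() that stays elementwise, assuming I is the plain numeric vector (0, 1, 2^2, ..., 255^2) and A2 the 256 x 256 data matrix described above:

psi <- function(s, Time) {
  # w, wx, wy are plain numeric vectors of length 256
  w  <- exp(-I * pi^2 * Time) * c(1, rep(0.5, length(I) - 1))
  wx <- w * (I^s[1])
  wy <- w * (I^s[2])
  # vector %*% matrix %*% vector collapses to a 1 x 1 result
  (-1)^sum(s) * (wy %*% A2 %*% wx) * pi^(2 * sum(s))
}

Note also that using I as a variable name shadows base R's I() function; renaming it (e.g. to Iv) would avoid confusion.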

ROI optimisation in R using multi-argument F_objective function

Trying to run a simple ROI optimisation in R, but after hours of fidgeting I'm at a loss. I keep getting the error:
Error in .check_function_for_sanity(F, n) :
cannot evaluate function 'F' using 'n' = 5 parameters.
Here is the sample code:
library(ROI)
library(nloptr)
library(ROI.plugin.nloptr)
#Generate some random data for this example
set.seed(3142)
myRet = matrix(runif(100 * 5, -0.1, 0.1), ncol = 5)
myCovMatrix = cov(myRet)
myRet <- myRet
myCovMatrix <- myCovMatrix
# Sample weights
w <- rep(1/ncol(myRet), ncol(myRet))
#Define functions for the optimisation
diversificationRatio = function(w, covMatrix)
{
  weightedAvgVol = sum(w * sqrt(diag(covMatrix)))
  portfolioVariance = (w %*% covMatrix %*% w)[1,1]
  - 1 * weightedAvgVol / sqrt(portfolioVariance)
}
# Check that the F_objective function works:
diversificationRatio(w, myCovMatrix)
# Now construct the F_objective
foo <- F_objective(F = diversificationRatio, n = (ncol(myRet)))
Any ideas on how many parameters to pass to n?
F_objective expects a function with only one argument, so you have to write a wrapper function.
#Define functions for the optimisation
diversificationRatio <- function(w, covMatrix) {
  weightedAvgVol <- sum(w * sqrt(diag(covMatrix)))
  portfolioVariance <- (w %*% covMatrix %*% w)[1,1]
  - 1 * weightedAvgVol / sqrt(portfolioVariance)
}
# Check that the F_objective function works:
wrapper <- function(x) diversificationRatio(x, myCovMatrix)
# Now construct the F_objective
o <- OP(F_objective(F = wrapper, n = (ncol(myRet))))
ROI_applicable_solvers(o)
start <- runif(ncol(myRet))
s <- ROI_solve(o, solver = "nloptr", start = start, method = "NLOPT_LD_SLSQP")
s
solution(s)

Markowitz model / portfolio optimization using local search in R

I am taking baby steps in using metaheuristics for solving constrained optimization problems. I am trying to solve the basic Markowitz mean-variance optimization model (given below) using the NMOF package in R.
Min
lambda * [sum_{i=1..N} sum_{j=1..N} w_i * w_j * Sigma_ij] - (1 - lambda) * [sum_{i=1..N} w_i * mu_i]
subject to
sum_{i=1..N} w_i = 1
0 <= w_i <= 1; i = 1, ..., N
where lambda takes values between 0 and 1 and N is the number of assets.
Following is my code (based on the book Numerical Methods and Optimization in Finance):
library(NMOF)
na <- dim(fundData)[2L]
ns <- dim(fundData)[1L]
Sigma <- cov(fundData)
winf <- 0.0
wsup <- 1.0
m <- colMeans(fundData)
resample <- function(x,...) x[sample.int(length(x),...)]
data <- list(R = t(fundData),
             m = m,
             na = dim(fundData)[2L],
             ns = dim(fundData)[1L],
             Sigma = Sigma,
             eps = 0.5/100,
             winf = winf,
             wsup = wsup,
             nFP = 100)
w0 <- runif(data$na); w0 <- w0/sum(w0)
OF <- function(w,data){
  wmu <- crossprod(w,m)
  res <- crossprod(w, data$Sigma)
  res <- tcrossprod(w,res)
  result <- res - wmu
}
neighbour <- function(w, data){
  toSell <- w > data$winf
  toBuy <- w < data$wsup
  i <- resample(which(toSell), size = 1L)
  j <- resample(which(toBuy), size = 1L)
  eps <- runif(1) * data$eps
  eps <- min(w[i] - data$winf, data$wsup - w[j], eps)
  w[i] <- w[i] - eps
  w[j] <- w[j] + eps
  w
}
algo <- list(x0 = w0, neighbour = neighbour, nS = 5000L)
system.time(sol1 <- LSopt(OF, algo, data))
I am not sure how to include lambda in the objective function (OF); the above code does not include it. I tried using a for loop, but it resulted in the following error:
OF <- function(w,data){
  lambdaSeq <- seq(.001,0.999, length = data$nFP)
  for(lambda in lambdaSeq){
    wmu <- crossprod(w,m)
    res <- crossprod(w, data$Sigma)
    res <- tcrossprod(w,res)
    result <- lambda*res - (1-lambda)*wmu
  }
}
Error:
Local Search.
Initial solution:
| | 0%
Error in if (xnF <= xcF) { : argument is of length zero
Timing stopped at: 0.01 0 0.03
It would be nice if someone could help me in this regard.
P.S: I am also aware that this can be solved using quadratic programming. This is just an initiation to include other constraints.
If I understand correctly, you want to replicate the mean-variance efficient frontier by Local Search? Then you need to run a Local Search for every value of lambda that you want to include in the frontier.
The following example should help you get going. I start by attaching the package and setting up the list data.
require("NMOF")
data <- list(m = colMeans(fundData),   ## expected returns
             Sigma = cov(fundData),    ## expected var of returns
             na = dim(fundData)[2L],   ## number of assets
             eps = 0.2/100,            ## stepsize for LS
             winf = 0,                 ## minimum weight
             wsup = 1,                 ## maximum weight
             lambda = 1)
Next I compute a benchmark for the minimum-variance case (i.e. lambda equals one).
## benchmark: the QP solution
## ==> this will only work with a recent version of NMOF,
## which you can get by saying:
## install.packages('NMOF', type = 'source',
## repos = c('http://enricoschumann.net/R',
## getOption('repos')))
##
require("quadprog")
sol <- NMOF:::minvar(data$Sigma, 0, 1)
Objective function and neighbourhood function. I have slightly simplified both functions (for clarity; using crossprod in the objective function would probably be more efficient).
OF <- function(w, data){
  data$lambda * (w %*% data$Sigma %*% w) -
    (1 - data$lambda) * sum(w * data$m)
}
neighbour <- function(w, data){
  toSell <- which(w > data$winf)
  toBuy <- which(w < data$wsup)
  i <- toSell[sample.int(length(toSell), size = 1L)]
  j <- toBuy[sample.int(length(toBuy), size = 1L)]
  eps <- runif(1) * data$eps
  eps <- min(w[i] - data$winf, data$wsup - w[j], eps)
  w[i] <- w[i] - eps
  w[j] <- w[j] + eps
  w
}
Now we can run Local Search. Since it is a fairly large dataset (200 assets), you will need a relatively large number of steps to reproduce the QP solution.
w0 <- runif(data$na) ## a random initial solution
w0 <- w0/sum(w0)
algo <- list(x0 = w0, neighbour = neighbour, nS = 50000L)
sol1 <- LSopt(OF, algo, data)
You can compare the weights you get from Local Search with the QP solution.
par(mfrow = c(3,1), mar = c(2,4,1,1), las = 1)
barplot(sol, main = "QP solution")
barplot(sol1$xbest, main = "LS solution")
barplot(sol - sol1$xbest,
ylim = c(-0.001,0.001)) ## +/-0.1%
Finally, if you want to compute the whole frontier, you need to rerun this code for different levels of data$lambda.
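A minimal sketch of such a loop, reusing the OF, neighbour, data and algo objects from above (the lambda grid and the quantities stored are only illustrative):

lambdas <- seq(0.05, 0.95, length.out = 19)
frontier <- matrix(NA_real_, nrow = length(lambdas), ncol = 2,
                   dimnames = list(NULL, c("variance", "return")))

for (k in seq_along(lambdas)) {
    data$lambda <- lambdas[k]          ## risk-aversion level for this run
    sol_k <- LSopt(OF, algo, data)
    w <- sol_k$xbest
    frontier[k, "variance"] <- c(w %*% data$Sigma %*% w)
    frontier[k, "return"]   <- sum(w * data$m)
}

plot(frontier[, "variance"], frontier[, "return"], type = "b",
     xlab = "portfolio variance", ylab = "expected return")

Warm-starting each run with the previous solution (setting algo$x0 <- sol_k$xbest at the end of each iteration) can speed up convergence along the frontier.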
