I am trying to optimize a function in R using the nloptr package.
Here is the code:
library('nloptr')
# Evaluate the n-th term h_n(x) of the three-term recurrence used in term() below.
hn <- function(x, n)
{
  if (n == 0) {
    return(1)
  } else if (n == 1) {
    return(2 * x)
  }
  hn2 <- 1      # h_0
  hn1 <- 2 * x  # h_1
  for (ni in seq(from = 2, to = n, by = 1)) {
    h   <- (2 * x * hn1 / sqrt(ni)) + (2 * sqrt((ni - 1) / ni) * hn2)
    hn2 <- hn1
    hn1 <- h
  }
  return(h)
}
# Compute one expansion coefficient A_n; the factors are assembled on the
# log scale and exponentiated at the end.
term <- function(alpha, r, theta, n)
{
  beta   <- alpha * cosh(r) - Conj(alpha) * exp(1i * theta) * sinh(r)
  hnterm <- beta / sqrt(exp(1i * theta) * sinh(2 * r))
  logterm1 <- (1 / 2) * log(cosh(r))
  logterm2 <- -((1 / 2) * abs(alpha)^2) + ((1 / 2) * Conj(alpha)^2) * exp(1i * theta) * tanh(r)
  logterm3 <- (n / 2) * log((1 / 2) * exp(1i * theta) * tanh(r))
  logterm4 <- log(hn(hnterm, n))
  A <- exp(logterm1 + logterm2 + logterm3 + logterm4)
  return(A)
}
# Objective: average error probability P_e as a function of the five
# controls in x and the amplitude alpha.
PESQ <- function(x, alpha)
{
  p0    <- x[1]
  p1    <- x[2]
  beta  <- x[3]
  r     <- x[4]
  theta <- x[5]
  N  <- 30
  NI <- seq(from = 0, to = N, by = 1)

  # P(off | on): normalized overlap probabilities, first displacement.
  elements <- rep(0 + 1i * 0, length(NI))
  for (n in NI) {
    elements[n + 1] <- term(2 * alpha + beta, r, theta, n)
  }
  total <- sum(abs(elements)^2)
  pr    <- Re(elements / sqrt(total))^2
  p_off_given_on <- pr[1]

  # P(on | off): same computation with the other displacement.
  elements <- rep(0 + 1i * 0, length(NI))
  for (n in NI) {
    elements[n + 1] <- term(beta, r, theta, n)
  }
  total <- sum(abs(elements)^2)
  pr    <- Re(elements / sqrt(total))^2
  p_on_given_off <- 1 - pr[1]

  P_e <- p0 * p_off_given_on + p1 * p_on_given_off
  return(P_e)
}
# Equality constraint: the probabilities p0 and p1 must sum to 1.
eval_g_eq <- function(x)
{
  return(x[1] + x[2] - 1)
}
lb <- c(0, 0, -Inf, 0.001, -pi)
ub <- c(1, 1, Inf, Inf, pi)
local_opts <- list("algorithm" = "NLOPT_LD_MMA",
"xtol_rel"=1.0e-18)
# Set optimization options.
opts <- list("algorithm" = "NLOPT_LN_AUGLAG",
"xtol_rel" = 1.0e-18, "local_opts" = local_opts, "maxeval" = 10000)
x0 <- c(0.1,0.9, 0.1, 0.01, 0.7853982)
alpha <- 0.65
eval_g_ineq <- function(x)
{
  return(c(-x[1] - x[2],
           x[1] + x[2] - 1))
}
eval_f <- function(x)
{
  return(PESQ(x, alpha))
}
res <- nloptr ( x0 = x0,
eval_f = eval_f,
eval_g_eq = eval_g_eq,
eval_g_ineq = eval_g_ineq,
lb = lb,
ub = ub,
opts = opts )
print(res)
Upon running this code, I get the following error:
Error in nloptr(x0 = x0, eval_f = eval_f, eval_g_ineq = eval_g_ineq, eval_g_eq = eval_g_eq, :
STRING_ELT() can only be applied to a 'character vector', not a 'NULL'
Calls: ... withCallingHandlers -> withVisible -> eval -> eval -> nloptr
Execution halted
The weird thing is that if I use "algorithm" = "NLOPT_LN_COBYLA" in opts and remove the equality constraint eval_g_eq from the nloptr call, it runs fine and I get a solution. However, I need the equality constraint for my work.
How should I fix the issue?
This is still a bit of a guess, but the only explanation I can come up with is that you are mixing a derivative-based local optimizer with a derivative-free outer optimizer. The NLopt docs clarify that the LN in NLOPT_LN_AUGLAG denotes "local, derivative-free", whereas LD would denote "local, derivative-based", so the gradient-based NLOPT_LD_MMA does not fit as the local optimizer here. I got an answer (not sure if it's correct though!) by using "NLOPT_LN_COBYLA" as the algorithm in local_opts; with everything else as in your code,
local_opts <- list("algorithm" = "NLOPT_LN_COBYLA",
"xtol_rel"=1.0e-18)
# Set optimization options.
opts <- list("algorithm" = "NLOPT_LN_AUGLAG",
"xtol_rel" = 1.0e-18, "local_opts" = local_opts, "maxeval" = 10000)
print(res <- nloptr ( x0 = x0,
eval_f = eval_f,
eval_g_eq = eval_g_eq,
eval_g_ineq = eval_g_ineq,
lb = lb,
ub = ub,
opts = opts ))
Returns
Call:
nloptr(x0 = x0, eval_f = eval_f, lb = lb, ub = ub, eval_g_ineq = eval_g_ineq,
eval_g_eq = eval_g_eq, opts = opts)
Minimization using NLopt version 2.4.2
NLopt solver status: 3 ( NLOPT_FTOL_REACHED: Optimization stopped because
ftol_rel or ftol_abs (above) was reached. )
Number of Iterations....: 102
Termination conditions: xtol_rel: 1e-18 maxeval: 10000
Number of inequality constraints: 2
Number of equality constraints: 1
Optimal value of objective function: 2.13836819774604e-05
Optimal value of controls: 0 1 -0.0003556752 0.006520304 2.037835
As far as I can see this has found a plausible solution that respects the constraints:
- the reason for stopping ("ftol_rel or ftol_abs ... was reached") is sensible;
- it used a reasonable number of iterations (102, not maxeval) to get there;
- eval_g_eq(res$solution) gives 0, which we can also see by inspection, since the condition is x[1] + x[2] - 1 == 0;
- the inequality constraints are -x1 - x2 and x1 + x2 - 1; nloptr interprets inequality constraints as g(x) <= 0, so together these just keep x1 + x2 within [0, 1], and given the equality constraint x1 + x2 = 1 they can never be active here;
- eval_f(x0) is considerably larger than eval_f(res$solution) (checked below).
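A quick numerical check of these points, using only the objects already defined in the code above:
# Sanity checks on the returned solution.
eval_g_eq(res$solution)    # ~0: the equality x[1] + x[2] = 1 holds
eval_g_ineq(res$solution)  # both components <= 0: inequalities satisfied
eval_f(x0)                 # objective at the starting point ...
eval_f(res$solution)       # ... versus the much smaller optimal value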
I was using the nloptr package to solve an optimization problem with a 9-variable cost function, using the program below:
eval_f <- function(x) {
  return(list(
    "objective" = 0.0404*x[1]^2 + 4.4823*x[1] + 0.4762 +
                  0.024 *x[2]^2 + 3.9767*x[2] + 0.3737 +
                  0.0246*x[3]^2 + 3.6992*x[3] + 0.9425 +
                  0.0214*x[4]^2 + 3.5896*x[4] + 0.7615 +
                  0.0266*x[5]^2 + 3.8197*x[5] + 0.2799 +
                  0.0262*x[6]^2 + 3.7884*x[6] + 0.307  +
                  0.0362*x[7]^2 + 4.4927*x[7] + 0.1549 +
                  0.0344*x[8]^2 + 4.4066*x[8] - 0.2472 +
                  0.0241*x[9]^2 + 4.227 *x[9],
    "gradient" = c(2*0.0404*x[1] + 4.4823,
                   2*0.024 *x[2] + 3.9767,
                   2*0.0246*x[3] + 3.6992,
                   2*0.0214*x[4] + 3.5896,
                   2*0.0266*x[5] + 3.8197,
                   2*0.0262*x[6] + 3.7884,
                   2*0.0362*x[7] + 4.4927,
                   2*0.0344*x[8] + 4.4066,
                   2*0.0241*x[9] + 4.227)
  ))
}
eval_g_eq <- function(x) {
  # Balance is assumed to be defined in the calling environment.
  constr <- c(x[1] + x[2] + x[3] + x[4] + x[5] + x[6] + x[7] + x[8] + x[9] - Balance)
  grad   <- c(1, 1, 1, 1, 1, 1, 1, 1, 1)
  return(list("constraints" = constr, "jacobian" = grad))
}
lb  <- c(0, 0, 0, 0, 0, 0, 0, 0, 0)            # lower bounds, one per variable
ub  <- c(50, 50, 50, 50, 50, 50, 50, 50, 50)   # upper bounds, one per variable
x_0 <- c(25, 25, 25, 25, 25, 25, 25, 25, 25)
local_opts <- list( "algorithm" = "NLOPT_LD_MMA","xtol_rel" = 1.0e-9 )
opts <- list( "algorithm" = "NLOPT_LD_AUGLAG","xtol_rel" = 1.0e-9,"maxeval" = 10000, "local_opts" = local_opts )
res <- nloptr(x0=x_0, eval_f=eval_f,lb=lb,ub=ub,eval_g_eq=eval_g_eq,opts=opts)
The code works fine, but the problem is that I need to solve this optimization over a 168-hour period, and at each time step the lower and upper bounds have to be different. Has anyone implemented this before?
I highly suggest you use OSQP for this. You can install it from CRAN. The manual has an example of updating the problem vectors, which I have rewritten here:
library(osqp)
library(Matrix)
# Define problem data in the form
# minimize (1/2) x' P x + q' x
# subject to l <= A x <= u
#
P <- Matrix(c(11., 0., 0., 0.), 2, 2, sparse = TRUE)
q <- c(3., 4.)
A <- Matrix(c(-1., 0., -1., 2., 3., 0., -1., -3., 5., 4.), 5, 2, sparse = TRUE)
u <- c(0., 0., -15., 100., 80)
l <- rep_len(-Inf, 5)
settings <- osqpSettings(verbose = FALSE)
model <- osqp(P, q, A, l, u, settings)
# Solve
res <- model$Solve()
# Get solution
x_opt <- res$x
# Define new vector
q_new <- c(10., 20.)
# Update model and solve again
model$Update(q = q_new)
res <- model$Solve()
# Get new solution
x_opt_new <- res$x
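Since in your case it is the bounds that change at each of the 168 time steps, you can update l and u in the same way and re-solve without rebuilding the model. A sketch of that loop, where lb_t and ub_t are hypothetical 168 x 5 matrices holding your per-hour bounds for the five constraint rows of A:
# Hypothetical per-hour bounds: here just the original l and u repeated
# for each of the 168 hours; replace with your real hourly values.
lb_t <- matrix(rep(l, each = 168), 168, 5)
ub_t <- matrix(rep(u, each = 168), 168, 5)
solutions <- vector("list", 168)
for (t in 1:168) {
  model$Update(l = lb_t[t, ], u = ub_t[t, ])  # swap in the bounds for hour t
  solutions[[t]] <- model$Solve()$x           # store the solution for hour t
}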
Disclaimer: I am one of the OSQP authors.
I am currently working on a project, and I want to use R with the nloptr package (or Gurobi) to solve the following optimization problem:
Find min ||y - y_hat||_2 such that x = A y_hat and y_hat >= 0, where x (16x1) and y (24x1) are given vectors and A is a given 16x24 matrix.
My attempt in R:
library(nloptr)
library(truncnorm)  # for rtruncnorm() below
nrow <- 16
ncol <- 24
lambda <- matrix(sample.int(100, size = ncol * nrow, replace = TRUE), nrow, ncol)
lambda <- lambda - diag(lambda) * diag(x = 1, nrow, ncol)  # zero out the diagonal
y <- rpois(ncol, lambda) + rtruncnorm(ncol, 0, 1, mean = 0, sd = 1)
x <- matrix(0, nrow, 1)
x_A1 <- y[1] + y[2] + y[3]
x_A2 <- y[4] + y[7] + y[3]
x_B1 <- y[4] + y[5] + y[6]
x_B2 <- y[11] + y[1]
x_C1 <- y[7] + y[8] + y[9]
x_C2 <- y[2] + y[5] + y[12]
x_D1 <- y[10] + y[11] + y[12]
x_D2 <- y[3] + y[6] + y[9]
x_E1 <- y[13] + y[14] + y[15]
x_E2 <- y[18] + y[19] + y[23]
x_F1 <- y[20] + y[21] + y[19]
x_F2 <- y[22] + y[16] + y[13]
x_G1 <- y[23] + y[22] + y[24]
x_G2 <- y[14] + y[17] + y[20]
x_H1 <- y[16] + y[17] + y[18]
x_H2 <- y[15] + y[21] + y[24]
d <- c(x_A1, x_A2,x_B1, x_B2,x_C1, x_C2,x_D1, x_D2,x_E1,
x_E2,x_F1, x_F2,x_G1, x_G2,x_H1, x_H2)
x <- matrix(d, nrow, byrow=TRUE)
A = matrix(c(1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_A^1
0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_A^2
0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_B^1
1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_B^2
0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_C^1
0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0, #x_C^2
0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0, #x_D^1
0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_D^2
0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0, #x_E^1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,0, #x_E^2
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0, #x_F^1
0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,0,0, #x_F^2
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1, #x_G^1
0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0, #x_G^2
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0, #x_H^1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1), #x_H^2
nrow, ncol, byrow= TRUE)
I tried two pieces of code to solve the problem min ||y - y_hat||_2 subject to x = A y_hat and y_hat >= 0, with x, y, A all given above.
# f(yhat) = ||yhat - y||_2
eval_f <- function(yhat) {
  return(list("objective" = sqrt(sum((yhat - y)^2))))
}
# inequality constraint: yhat >= 0, written as -yhat <= 0
eval_g_ineq <- function(yhat) {
  return(list("constraints" = -yhat))
}
# equality constraint: x = A %*% yhat
eval_g_eq <- function(yhat) {
  return(list("constraints" = c(x - A %*% yhat)))
}
x0 <- y
# lower bound of the control variable
lb <- rep(0, ncol)
local_opts <- list( "algorithm" = "NLOPT_LD_MMA",
"xtol_rel" = 1.0e-7 )
opts <- list( "algorithm" = "NLOPT_LD_AUGLAG",
"xtol_rel" = 1.0e-7,
"maxeval" = 1000,
"local_opts" = local_opts )
res <- nloptr( x0=x0,
eval_f=eval_f,
eval_grad_f = NULL,
lb=lb,
eval_g_ineq = eval_g_ineq,
eval_g_eq=eval_g_eq,
opts=opts)
print(res)
Gurobi code:
#model <- list()
#model$B <- A
#model$obj <- norm((y-yhat)^2, type = "2")
#model$modelsense <- "min"
#model$rhs <- c(x,0)
#model$sense <- c('=', '>=')
#model$vtype <- 'C'
#result <- gurobi(model, params)
#print('Solution:')
#print(result$objval)
#print(result$yhat)
My question: First, when I ran the R code above, it kept giving me this message:
Error in is.nloptr(ret) :
wrong number of elements in gradient of objective
In addition: Warning message:
In is.na(f0$gradient) :
is.na() applied to non-(list or vector) of type 'NULL'
I tried to avoid computing the gradient, as I do not have any information about the density function of y. Could anyone please help me fix the error above?
For the Gurobi code, I got this message:
Error: is(model$A, "matrix") || is(model$A, "sparseMatrix") || is(model$A, .... is not TRUE
But my matrix A is entered correctly, so what does this error mean?
I started using nloptr only a few days ago. This question is already an old one, but I will still answer it: when you use nloptr with the NLOPT_LD_AUGLAG algorithm, the LD stands for "local, using gradients". So you need to choose an algorithm with LN in the middle instead; for example, NLOPT_LN_COBYLA should work fine without a gradient.
You can also just look this up in the nloptr package manual.
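For concreteness, here is a minimal derivative-free sketch of the call from the question, using the same LN_AUGLAG + LN_COBYLA combination that worked in the first answer above (it reuses y, x, A and lb as defined in the question; objective and constraints return plain values, since no gradients are supplied):
# Derivative-free setup: neither the outer AUGLAG loop nor the local
# COBYLA optimizer needs gradient information.
local_opts <- list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-7)
opts <- list("algorithm" = "NLOPT_LN_AUGLAG",
             "xtol_rel" = 1.0e-7,
             "maxeval" = 10000,
             "local_opts" = local_opts)
res <- nloptr(x0 = y,
              eval_f = function(yhat) sqrt(sum((yhat - y)^2)),  # ||yhat - y||_2
              lb = lb,
              eval_g_ineq = function(yhat) -yhat,               # yhat >= 0
              eval_g_eq = function(yhat) c(x - A %*% yhat),     # x = A %*% yhat
              opts = opts)
print(res)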