Using NLOPT/Gurobi for solving mixed constraint optimization - r

I am currently working on a project, and I want to use R with the nloptr package (or Gurobi) to solve the following optimization problem:
Find min ||y - y_h||_2 such that x = A y_h and y_h >= 0, where x is a given 16x1 vector, y is a given 24x1 vector, and A is a given 16x24 matrix.
My attempt:
R code:
library(truncnorm)  # provides rtruncnorm
nrow <- 16
ncol <- 24
lambda <- matrix(sample.int(100, size = ncol*nrow, replace = TRUE), nrow, ncol)
lambda <- lambda - diag(lambda)*diag(x = 1, nrow, ncol)  # zero out the diagonal
y <- rpois(ncol, lambda) + rtruncnorm(ncol, 0, 1, mean = 0, sd = 1)
x = matrix (0, nrow, 1);
x_A1 = y[1]+y[2]+y[3];
x_A2 = y[4]+y[7]+y[3];
x_B1 = y[4]+y[5]+y[6];
x_B2 = y[11]+y[1];
x_C1 = y[7]+y[8]+y[9];
x_C2 = y[2]+y[5]+y[12];
x_D1 = y[10]+y[11]+y[12];
x_D2 = y[3]+y[6]+y[9];
x_E1 = y[13]+y[14]+y[15];
x_E2 = y[18]+y[19]+y[23];
x_F1 = y[20]+y[21]+y[19];
x_F2 = y[22]+y[16]+y[13];
x_G1 = y[23]+y[22]+y[24];
x_G2 = y[14]+y[17]+y[20];
x_H1 = y[16]+y[17]+y[18];
x_H2 = y[15]+y[21]+y[24];
d <- c(x_A1, x_A2,x_B1, x_B2,x_C1, x_C2,x_D1, x_D2,x_E1,
x_E2,x_F1, x_F2,x_G1, x_G2,x_H1, x_H2)
x <- matrix(d, nrow, byrow=TRUE)
A = matrix(c(1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_A^1
0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_A^2
0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_B^1
1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_B^2
0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_C^1
0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0, #x_C^2
0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0, #x_D^1
0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, #x_D^2
0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0, #x_E^1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,0, #x_E^2
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0, #x_F^1
0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,0,0, #x_F^2
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1, #x_G^1
0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0, #x_G^2
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0, #x_H^1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1), #x_H^2
nrow, ncol, byrow= TRUE)
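As a quick sanity check on the construction (each entry of x is, by definition, a sum of components of y, so with the rows of A in the same order as d, x = A %*% y should hold up to floating-point error):
stopifnot(isTRUE(all.equal(as.numeric(A %*% y), as.numeric(x))))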
I tried two pieces of code to solve the problem: min ||y - y_h||_2 subject to x = A y_h and y_h >= 0, where x, y, and A are all given above.
# f(yhat) = ||yhat - y||_2^2
eval_f <- function( yhat ) {
  return( list( "objective" = sum((yhat - y)^2) ) )
}
# inequality constraint
eval_g_ineq <- function( yhat ) {
  constr <- c( 0 - yhat )   # yhat >= 0, expressed as g(yhat) <= 0
  return( list( "constraints" = constr ) )
}
# equality constraints
eval_g_eq <- function( yhat ) {
  constr <- c( x - A %*% yhat )   # x = A yhat
  return( list( "constraints" = constr ) )
}
x0 <- y
# lower bound of the control variable
lb <- rep(0, ncol)
local_opts <- list( "algorithm" = "NLOPT_LD_MMA",
                    "xtol_rel" = 1.0e-7 )
opts <- list( "algorithm" = "NLOPT_LD_AUGLAG",
              "xtol_rel" = 1.0e-7,
              "maxeval" = 1000,
              "local_opts" = local_opts )
res <- nloptr( x0 = x0,
               eval_f = eval_f,
               eval_grad_f = NULL,
               lb = lb,
               eval_g_ineq = eval_g_ineq,
               eval_g_eq = eval_g_eq,
               opts = opts )
print(res)
Gurobi code:
#model <- list()
#model$B <- A
#model$obj <- norm((y-yhat)^2, type = "2")
#model$modelsense <- "min"
#model$rhs <- c(x,0)
#model$sense <- c('=', '>=')
#model$vtype <- 'C'
#result <- gurobi(model, params)
#print('Solution:')
#print(result$objval)
#print(result$yhat)
My question: First, when I ran the R code above, it kept giving me this message:
Error in is.nloptr(ret) :
wrong number of elements in gradient of objective
In addition: Warning message:
In is.na(f0$gradient) :
is.na() applied to non-(list or vector) of type 'NULL'
I tried to avoid computing the gradient, as I do not have any information about the density function of y. Could anyone please help me fix the error above?
For the Gurobi code, I got this message: Error: is(model$A, "matrix") || is(model$A, "sparseMatrix") || is(model$A, .... is not TRUE
But my matrix A is correctly inputted, so what does this error mean?

I started using nloptr only several days ago. This question is already an old one, but I will still answer it. When you use nloptr with the NLOPT_LD_AUGLAG algorithm, the 'LD' means local and gradient-based, so you need to choose an algorithm with 'LN' in the middle instead. For example, NLOPT_LN_COBYLA should work fine without a gradient.
You can also just look this up in the nloptr package manual.
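To make that concrete, here is a minimal derivative-free sketch for the original problem (an illustration, not tested against the asker's data), assuming x, y, A, and ncol are defined as in the question. Swapping both LD algorithms for their LN counterparts keeps the augmented-Lagrangian handling of the equality constraints, with gradient-free COBYLA as the local solver:
library(nloptr)
# objective: ||yhat - y||_2^2, no gradient supplied
eval_f <- function(yhat) sum((yhat - y)^2)
# equality constraints: x = A yhat
eval_g_eq <- function(yhat) as.numeric(x - A %*% yhat)
local_opts <- list( "algorithm" = "NLOPT_LN_COBYLA",
                    "xtol_rel" = 1.0e-7 )
opts <- list( "algorithm" = "NLOPT_LN_AUGLAG",
              "xtol_rel" = 1.0e-7,
              "maxeval" = 10000,
              "local_opts" = local_opts )
res <- nloptr( x0 = as.numeric(y),   # start from the observed y
               eval_f = eval_f,
               lb = rep(0, ncol),    # yhat >= 0
               eval_g_eq = eval_g_eq,
               opts = opts )
print(res)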

Related

Problem with multi-objective optimization constraints: R

I have the following code that defines two constraints I want to use in my multi-objective optimization problem, given that model1, model2, and model3 have already been verified to work.
restrictions <- function(var) {
  x <- var[1]; y <- var[2]
  restrictions <- logical(2)
  restrictions[1] <- (predict(get(model1), data.frame(x, y), type = "response") < 500)
  restrictions[2] <- (predict(get(model1), data.frame(x, y), type = "response") > 0)
  return(restrictions)
}
Then I build the multi-objective function for the genetic algorithm in the following code:
fn <- function(var) {
  x <- var[1]; y <- var[2]
  f <- numeric(3)
  f[1] <- predict(get(model1), data.frame(x, y), type = "response")
  f[2] <- predict(get(model2), data.frame(x, y), type = "response")
  f[3] <- predict(get(model3), data.frame(x, y), type = "response")
  return(f)
}
And finally, the optimization itself runs using the mco library:
library(mco)
optimum <- mco::nsga2(fn = fn, idim = 2, odim = 3,
                      constraints = restrictions, cdim = 2,
                      generations = 100,
                      popsize = 40,
                      cprob = 0.5,
                      cdist = 20,
                      mprob = 0.5,
                      mdist = 20,
                      lower.bounds = c(-80, 50),
                      upper.bounds = c(-70, 60))
The main problem is that the solution does not abide with the constraint specified. Any thoughts on that?
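One thing worth checking (an assumption on my part, based on the usual NSGA-II convention, which I believe mco follows): the constraint function is expected to return numeric values, where a value >= 0 means the constraint is satisfied. A logical vector coerces to 0/1, which are both nonnegative, so violations would never register. A sketch of the numeric version:
restrictions <- function(var) {
  x <- var[1]; y <- var[2]
  p <- predict(get(model1), data.frame(x, y), type = "response")
  # numeric slacks: nonnegative when the constraint holds
  c(500 - p,   # prediction below 500
    p)         # prediction above 0
}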

Creating a giant matrix for use in CARBayes without running out of memory

I am trying to do an areal unit analysis using the package CARBayes. As part of the analysis I am using the code below. My issue comes when I try to create the neighbour matrix with nb2mat: my sp object has 170,000-odd polygons in it, so the matrix can't be built with the memory I have.
library(spdep)
library(CARBayes)
W.nb <- poly2nb(sp)
W <- nb2mat(W.nb, style = "B", zero.policy = TRUE)
test <- S.CARbym(case ~ covariate1,
                 family = "poisson",
                 data = sp,
                 W = W,
                 burnin = 10000,
                 n.sample = 30000,
                 thin = 20)
I found the code below in another thread for making a bigmemory matrix, but CARBayes won't recognise it as a matrix.
My question is: does anyone know a way to use bigmemory, spam, a sparse matrix, or something similar to create the matrix so that it can be used in the CARBayes package without it throwing an error saying that W isn't a matrix?
my_listw2mat = function (listw)
{
  require(bigmemory)
  n <- length(listw$neighbours)
  if (n < 1)
    stop("non-positive number of entities")
  cardnb <- card(listw$neighbours)
  if (any(is.na(unlist(listw$weights))))
    stop("NAs in general weights list")
  #res <- matrix(0, nrow = n, ncol = n)
  res <- big.matrix(n, n, type = 'double', init = NULL)
  options(bigmemory.allow.dimnames = TRUE)
  for (i in 1:n) if (cardnb[i] > 0)
    res[i, listw$neighbours[[i]]] <- listw$weights[[i]]
  if (!is.null(attr(listw, "region.id")))
    row.names(res) <- attr(listw, "region.id")
  res
}
my_nb2mat = function (neighbours, glist = NULL, style = "W", zero.policy = NULL)
{
  if (is.null(zero.policy))
    zero.policy <- get("zeroPolicy", envir = .spdepOptions)
  stopifnot(is.logical(zero.policy))
  if (!inherits(neighbours, "nb"))
    stop("Not a neighbours list")
  listw <- nb2listw(neighbours, glist = glist, style = style,
                    zero.policy = zero.policy)
  res <- my_listw2mat(listw)
  attr(res, "call") <- match.call()
  res
}
W <- my_nb2mat(W.nb, style = "B", zero.policy = TRUE)
test <- S.CARbym(case ~ covariate1,
                 family = "poisson",
                 data = sp,
                 W = W,
                 burnin = 10000,
                 n.sample = 30000,
                 thin = 20)
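For what it's worth, here is a sketch of building W as a sparse binary matrix with the Matrix package, assuming W.nb is the nb object from poly2nb() (spdep encodes a region with no neighbours as the single value 0). Whether S.CARbym will accept a sparse W depends on the CARBayes version; its input checks may still insist on a base matrix:
library(Matrix)
n <- length(W.nb)
ij <- do.call(rbind, lapply(seq_len(n), function(i) {
  nbrs <- W.nb[[i]]
  if (length(nbrs) == 1 && nbrs[1] == 0) return(NULL)  # no neighbours
  cbind(i, nbrs)
}))
# binary ("B"-style) neighbourhood matrix, stored sparsely
W_sparse <- sparseMatrix(i = ij[, 1], j = ij[, 2], x = 1, dims = c(n, n))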

Scatterplot matrix - Error: Viewport 'plot_01.panel.1.1.off.vp' was not found

When I want to create a scatterplot matrix, I get the following error:
Error in grid.Call.graphics(C_downviewport, name$name, strict) :
  Viewport 'plot_01.panel.1.1.off.vp' was not found
How can I fix it?
varNum <- function(x){
  val <- 1:ncol(x)
  names(val) <- colnames(x)
  return(val)
}
varNum(house)
Bedroom SquareFeet Followers VisitingTime TotalPrice UnitPrice
1 2 3 4 5 6
District Location
7 8
house1 <- house[,c(7,1:6)]
library(lattice)  # splom and the panel functions
library(hexbin)   # panel.hexbinplot
offDiag <- function(x, y, ...){
  panel.grid(h = -1, v = -1, ...)
  panel.hexbinplot(x, y, xbins = 15, ..., border = gray(.7),
                   trans = function(x) x^.5)
  # panel.loess(x, y, ..., lwd = 2, col = 'red')
}
onDiag <- function(x, ...){
  yrng <- current.panel.limits()$ylim
  d <- density(x, na.rm = TRUE)
  d$y <- with(d, yrng[1] + 0.95 * diff(yrng) * y / max(y))
  panel.lines(d, col = rgb(.83, .66, 1), lwd = 2)
  diag.panel.splom(x, ...)
}
splom(house1,as.matrix = TRUE,
xlab = '',main = "Beijing Housing Variables",
pscale = 0, varname.cex = 0.8,axis.text.cex = 0.6,
axis.text.col = "purple",axis.text.font = 2,
axis.line.tck = .5,
panel = offDiag,
diag.panel = onDiag
)
Error in grid.Call.graphics(C_downviewport, name$name, strict) :
Viewport 'plot_01.panel.1.1.off.vp' was not found
Try installing the ellipse package. You do not need to load it as a library, only install it.
install.packages("ellipse")
I had the same issue. In my case it was caused by passing the splom argument col = mydataframe$somevariable, where somevariable was a categorical variable of strings. Specifying it as col = as.numeric(as.factor(mydataframe$somevariable)) fixed the issue. To anyone faced with this error message in the future: try removing optional arguments to identify what might be wrong.

Training mxnet:mx.mlp

I am trying to reproduce an example from N.D. Lewis, Neural Networks for Time Series Forecasting with R. If I include the device argument, I get the error:
Error in mx.opt.sgd(...) :
unused argument (device = list(device = "cpu", device_id = 0, device_typeid = 1))
In addition: Warning message:
In mx.model.select.layout.train(X, y) :
Auto detect layout of input matrix, use rowmajor..
If I remove this parameter, I still get this warning:
Warning message:
In mx.model.select.layout.train(X, y) :
Auto detect layout of input matrix, use rowmajor..
The code is:
library(zoo)
library(quantmod)
library(mxnet)
# data
data("ecoli", package = "tscount")
data <- ecoli$cases
data <- as.zoo(ts(data, start = c(2001, 1), end = c(2013, 20), frequency = 52))
xorig <- do.call(cbind, lapply((1:4), function(x) as.zoo(Lag(data, k = x))))
xorig <- cbind(xorig, data)
xorig <- xorig[-(1:4), ]
# normalization
range_data <- function(x) {
  (x - min(x)) / (max(x) - min(x))
}
xnorm <- data.matrix(xorig)
xnorm <- range_data(xnorm)
# test/train
y <- xnorm[, 5]
x <- xnorm[, -5]
n_train <- 600
x_train <- x[(1:n_train), ]
y_train <- y[(1:n_train)]
x_test <- x[-(1:n_train), ]
y_test <- y[-(1:n_train)]
# mxnet:
mx.set.seed(2018)
model1 <- mx.mlp(x_train,
y_train,
hidden_node = c(10, 2),
out_node = 1,
activation = "sigmoid",
out_activation = "rmse",
num.round = 100,
array.batch.size = 20,
learning.rate = 0.07,
momentum = 0.9
#, device = mx.cpu()
)
pred1_train <- predict(model1, x_train, ctx = mx.cpu())
How can I fix this?
Regarding the second warning message: MXNet is trying to detect the row/column-major layout based on the shape of your inputs, so the warning is informational — see https://github.com/apache/incubator-mxnet/blob/424143ac47ab3a38ae8aedaeb3319379887de0bc/R-package/R/model.R#L329
For the unused argument device = mx.cpu(): the argument name should be corrected to ctx instead of device.
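A sketch of the corrected call under that assumption (mx.mlp appears to forward unrecognised named arguments onwards, which would explain why the stray device argument surfaces in mx.opt.sgd):
model1 <- mx.mlp(x_train, y_train,
                 hidden_node = c(10, 2),
                 out_node = 1,
                 activation = "sigmoid",
                 out_activation = "rmse",
                 num.round = 100,
                 array.batch.size = 20,
                 learning.rate = 0.07,
                 momentum = 0.9,
                 ctx = mx.cpu())   # was: device = mx.cpu()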

Minimization with R nloptr package - multiple equality constraints

Is it possible to specify more than one equality constraint in the nloptr function in R? The code that I am trying to run is the following:
eval_f <- function( x ) {
  return( list( "objective" = x[3]^2 + x[4]^2,
                "gradient" = c( 0,
                                0,
                                2*x[3],
                                2*x[4] ) ) )
}
# constraint functions
# equalities
eval_g_eq <- function( x ) {
  constr <- c( x[1] + x[2] + x[3] - 4,
               x[1]^2 + x[2]^2 + x[4] - 15 )
  grad <- c( c(1, 1, 1, 0),
             c(2*x[1], 2*x[2], 0, 1) )
  return( list( "constraints" = constr, "jacobian" = grad ) )
}
# initial values
x0 <- c( 1, 5, 5, 1 )
local_opts <- list( "algorithm" = "NLOPT_LD_MMA",
                    "xtol_rel" = 1.0e-7 )
opts <- list( "algorithm" = "NLOPT_LD_AUGLAG",
              "xtol_rel" = 1.0e-7,
              "maxeval" = 1000,
              "local_opts" = local_opts )
res <- nloptr( x0 = x0,
               eval_f = eval_f,
               eval_g_eq = eval_g_eq,
               opts = opts )
print( res )
The result it produce is the following:
Current value of controls: -1.035323 3.093593 2.409501 0.2708714
However, these values do not satisfy the equality constraints, i.e.
-1.035323 + 3.093593 + 2.409501 = 4.467771
(-1.035323)^2 + 3.093593^2 + 0.2708714 = 10.91308
I guess that either it is impossible to specify multiple equality constraints in the nloptr function, or I passed them in the wrong way.
I did not find any example with more than one equality constraint in the package documentation.
UPDATE
OK, I solved it. The issue was that when specifying constr and grad in eval_g_eq, one should use rbind() instead of c() for the Jacobian.
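For reference, the corrected eval_g_eq following that fix — the Jacobian becomes a matrix with one row per constraint and one column per variable:
eval_g_eq <- function( x ) {
  constr <- c( x[1] + x[2] + x[3] - 4,
               x[1]^2 + x[2]^2 + x[4] - 15 )
  grad <- rbind( c(1, 1, 1, 0),             # gradient of the first constraint
                 c(2*x[1], 2*x[2], 0, 1) )  # gradient of the second constraint
  return( list( "constraints" = constr, "jacobian" = grad ) )
}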
I answered this recently in a different post for inequality constraints, but you should be able to return multiple equality constraints in a vector as well, using c():
"multiple inequality constraints" - Minimization with R nloptr package
