How to write a JuMP constraint into a text file? - julia

I have searched but didn't find an answer. I have read an MPS file into a JuMP model. Now I would like to write a subset of the constraints to a text file for further analysis. I know how to write a JuMP model to an LP or MPS file, but here I just want a subset of the constraints in a text file. If I could also parse the constraints, figuring out which are the coefficients and which are the variables, that would be even better. Thanks in advance!

println(file_stream, m)
println will nicely format a JuMP model, and the output will be a plain text file.
Full code:
using JuMP
using GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1 >= 0)
@variable(m, x2 >= 0)
@constraint(m, x1 + x2 <= 10)
@objective(m, Max, 2x1 + x2)
open("model.txt", "w") do f
println(f, m)
end
Let's see what is in the file:
$ more model.txt
Max 2 x1 + x2
Subject to
x1 + x2 <= 10.0
x1 >= 0.0
x2 >= 0.0
If you want only the constraints, this code will do:
open("cons.txt","w") do f
for c in vcat([all_constraints(m, t...) for t in list_of_constraint_types(m)]...)
println(f , c)
end
end
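To also parse out the coefficients and variables, JuMP's constraint_object accessor exposes the function and set behind each constraint reference. Here is a minimal sketch, assuming the constraints of interest are scalar affine (the usual case for a model read from an MPS file):
using JuMP

for (F, S) in list_of_constraint_types(m)
    for con in all_constraints(m, F, S)
        cobj = constraint_object(con)  # has fields func (expression) and set
        if cobj.func isa AffExpr
            # AffExpr stores variable => coefficient pairs in its terms field
            for (var, coef) in cobj.func.terms
                println(name(var), " => ", coef)
            end
            println("set: ", cobj.set)
        end
    end
end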

Related

understanding application of function: x > y - z & x < y + z

I am not particularly great with math and am trying to understand some R code. There is a function called "compareforequality" that looks like this:
compareforequality <- function(val1, val2, epsilon)
{
    val1 = as.numeric(val1);
    val2 = as.numeric(val2);
    equal = val1 > (val2 - epsilon) & val1 < val2 + epsilon;
    equal
}
where val1 and val2 are vectors of numbers that represent time points (usually integers between -10 and 1000 that identify days in a time series), and epsilon is set to 1e-10. I can see that it returns TRUE/FALSE according to whether the values are the same or different, but what is the application of a function like this instead of using something like identical()? What effect does the value of epsilon have on the comparison?
Thanks,
The point is not for them to be exactly equal, it's to compare for rough equality, as in "val1 is within epsilon of val2".
The classic example of the usefulness of something like this is probably floating point numbers, where (for instance) 0.1 + 0.2 != 0.3, but 0.1 + 0.2 is within epsilon of 0.3 for some small epsilon, which is quite often enough.
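To make this concrete, here is a small demonstration (reusing the function from the question) of a case where exact comparison fails but the epsilon comparison behaves as intended:
compareforequality <- function(val1, val2, epsilon) {
    val1 <- as.numeric(val1)
    val2 <- as.numeric(val2)
    val1 > (val2 - epsilon) & val1 < (val2 + epsilon)
}

0.1 + 0.2 == 0.3                           # FALSE: floating-point rounding
identical(0.1 + 0.2, 0.3)                  # FALSE for the same reason
compareforequality(0.1 + 0.2, 0.3, 1e-10)  # TRUE: within epsilon of each other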

Functions for multivariate numerical integration in R [duplicate]

I am using the following R code, taken from a published paper (citation below):
int2 = function(x, r, n, p) {
    (1+x)^((n-1-p)/2) * (1+(1-r^2)*x)^(-(n-1)/2) * x^(-3/2) * exp(-n/(2*x))
}
integrate(f=int2,lower=0,upper=Inf,n=530,r=sqrt(.245),p=3, stop.on.error=FALSE)
When I run it, I get the error "non-finite function value". Yet Maple is able to compute this as 4.046018765*10^27.
I tried using "integral" in package pracma, which gives me a different error:
Error in if (delta < tol) break : missing value where TRUE/FALSE needed
The overall goal is to compute a ratio of two integrals, as described in Wetzels & Wagenmakers (2012) "A default Bayesian hypothesis test for correlations" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3505519/). The entire function is as follows:
jzs.pcorbf = function(r0, r1, p0, p1, n) {
    int = function(r, n, p, g) {
        (1+g)^((n-1-p)/2) * (1+(1-r^2)*g)^(-(n-1)/2) * g^(-3/2) * exp(-n/(2*g))
    }
    bf10 = integrate(int, lower=0, upper=Inf, r=r1, p=p1, n=n)$value /
           integrate(int, lower=0, upper=Inf, r=r0, p=p0, n=n)$value
    return(bf10)
}
Thanks!
The issue is that your integrand is generating NaN values when called with x values in its domain. You're integrating from 0 to infinity, so let's check a valid value, x = 1000:
int2(1000, sqrt(0.245), 530, 3)
# [1] NaN
Your integrand multiplies four pieces:
x <- 1000
r <- sqrt(0.245)
n <- 530
p <- 3
(1+x)^((n-1-p)/2)
# [1] Inf
(1+(1-r^2)*x)^(-(n-1)/2)
# [1] 0
x^(-3/2)
# [1] 3.162278e-05
exp(-n/(2*x))
# [1] 0.7672059
We can now see that the issue is that you're multiplying infinity by 0 (or rather something numerically equal to infinity times something numerically equal to 0), which is causing the numerical issues. Instead of calculating a*b*c*d, it will be more stable to calculate exp(log(a) + log(b) + log(c) + log(d)) (using the identity that log(a*b*c*d) = log(a)+log(b)+log(c)+log(d)). One other quick note -- the value x=0 needs a special case.
int3 = function(x, r, n, p) {
    loga <- ((n-1-p)/2) * log(1+x)
    logb <- (-(n-1)/2) * log(1+(1-r^2)*x)
    logc <- -3/2 * log(x)
    logd <- -n/(2*x)
    return(ifelse(x == 0, 0, exp(loga + logb + logc + logd)))
}
integrate(f=int3,lower=0,upper=Inf,n=530,r=sqrt(.245),p=3, stop.on.error=FALSE)
# 1.553185e+27 with absolute error < 2.6e+18
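The same log-space integrand can then be dropped into the ratio of integrals from the question. A sketch, with the hypothetical name jzs.pcorbf.stable:
# Sketch: the original Bayes factor ratio, using the stable integrand int3
jzs.pcorbf.stable <- function(r0, r1, p0, p1, n) {
    integrate(int3, lower=0, upper=Inf, r=r1, p=p1, n=n)$value /
        integrate(int3, lower=0, upper=Inf, r=r0, p=p0, n=n)$value
}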

Solving a mixed system of equality and inequality

Intro: I successfully use the rSymPy library to symbolically solve the following example system of equalities.
x + y = 20; x + 2y = 10
library(rSymPy)
sympy("var('x')")
sympy("var('y')")
sympy("solve([
Eq(x+y, 20),
Eq(x+2*y, 10)
],
[x,y])")
# output
#[1] "{x: 30, y: -10}"
Use case: Now I want to symbolically solve a mixed system of equalities and inequalities. Here's a reproducible example:
x + y = 20; x + 2y > 10
The inequality can be successfully coded in rSymPy with Gt:
sympy("Gt(x+2*y, 10)")
# output
# [1] "10 < x + 2*y"
Problem: The code for the mixed system throws an error:
sympy("solve([
Eq(x + y, 20),
Gt(x+2*y, 10)
],
[x,y])")
# output
# Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl, :
# Traceback (most recent call last):
# File "<string>", line 1, in <module>
# File "/Users/.../R/3.0/library/rSymPy/Lib/sympy/solvers/solvers.py", line 308, in solve
# raise NotImplementedError()
# NotImplementedError
Question: How can I refactor the code successfully to solve the mixed system?
1) Define a positive variable z. Then the system can be recast as a system of equalities in terms of z:
x <- Var('x')
y <- Var('y')
z <- Var('z')
sympy("solve( [ Eq(x+y, 20), Eq(x + 2*y - z, 10) ], [x, y] )")
giving:
[1] "{x: 30 - z, y: -10 + z}"
2) This is a linear programming problem, so if you are just looking for any feasible solution to the constraints then the lpSolve package can provide one. In this case it gives the solution corresponding to z=10 in (1):
library(lpSolve)
out <- lp(, c(0, 0), matrix(c(1, 1, 1, 2), 2), c("=", ">"), c(20, 10))
out$solution
## [1] 20 0
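As a quick check that this matches the parametric answer in (1), the slack can be recovered from the feasible point:
# Recover z = x + 2*y - 10 from the solution (20, 0)
xy <- out$solution
xy[1] + 2 * xy[2] - 10
## [1] 10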
ADDED first solution in response to comment from poster. Added some further discussion.
It looks like NotImplementedError is coming from SymPy itself, which can't solve multivariate inequalities; it can only reduce them (which is what your example did). It doesn't appear that the library supports that type of system.
def _solve_inequality(ie, s, assume=True):
    """ A hacky replacement for solve, since the latter only works for
    univariate inequalities. """
    if not ie.rel_op in ('>', '>=', '<', '<='):
        raise NotImplementedError
    expr = ie.lhs - ie.rhs
    try:
        p = Poly(expr, s)
        if p.degree() != 1:
            raise NotImplementedError
    except (PolynomialError, NotImplementedError):
        try:
            n, d = expr.as_numer_denom()
            return reduce_rational_inequalities([[ie]], s, assume=assume)
        except PolynomialError:
            return solve_univariate_inequality(ie, s, assume=assume)
    a, b = p.all_coeffs()
    if a.is_positive:
        return ie.func(s, -b/a)
    elif a.is_negative:
        return ie.func(-b/a, s)
    else:
        raise NotImplementedError
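One workaround that stays inside rSymPy is to eliminate a variable using the equality by hand, which leaves a univariate inequality that solve can handle (per the docstring above). A sketch, assuming the substitution is done manually and that the bundled SymPy can reduce the univariate result:
library(rSymPy)
sympy("var('x')")
sympy("var('y')")
# Solve the equality for x ...
sympy("solve(Eq(x + y, 20), x)")           # x = 20 - y
# ... then substitute into the inequality, leaving only y:
sympy("solve(Gt((20 - y) + 2*y, 10), y)")  # reduces to a bound on y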
In the meantime I discovered the excellent LIM package, which nicely allows a symbolic specification of various kinds of linear inverse problems.
The ASCII file linprog.lim contains the problem in human-readable form (it is located in the same directory as the R code):
## UNKNOWNS
X
Y
## END UNKNOWNS
## EQUALITIES
X + Y = 20
## END EQUALITIES
## INEQUALITIES
X + 2 * Y > 10
## END INEQUALITIES
## PROFIT
X + Y
## END PROFIT
The following R code provides the solution to the linear programming problem:
require(LIM)
model <- Setup("linprog.lim")
model.solved <- Linp(model, ispos = FALSE, verbose = TRUE)
model.solved
Output:
$residualNorm
[1] 3.552714e-15
$solutionNorm
[1] 20
$X
X Y
[1,] 20 0
Here's another approach using the Rglpk library which can read in and solve GNU's MathProg scripted problems in R:
# in case CRAN install does not work
install.packages("Rglpk", repos="http://cran.us.r-project.org")
library(Rglpk)
## read file
x <- Rglpk_read_file("mathprog1.mod", type = "MathProg", verbose = TRUE)
## optimize
Rglpk_solve_LP(obj = x$objective,
               mat = x$constraints[[1]],
               dir = x$constraints[[2]],
               rhs = x$constraints[[3]],
               bounds = x$bounds,
               types = x$types,
               max = x$maximum)
File mathprog1.mod:
# Define Variables
var x;
var y;
# Define Constraints
s.t. A: x + y = 20;
s.t. B: x + 2*y >= 10;
# Define Objective
maximize z: x + y;
# Solve
solve;
end;
R console output:
# $optimum
# [1] 20
# $solution
# [1] 0 20
# $status
# [1] 0

How to write lp object to lp file?

I have been using lpSolve and lpSolveAPI. I build my constraint matrix, objective function, etc., feed them to the lp function, and this works just fine. I want to save the problem as an lp file using write.lp and am having trouble. I keep getting an error telling me that the object is not an lp object. Any ideas?
> x1 = lp(direction = "min", cost, A , ">=",r,,3:13, , , ,FALSE)
> class(x1)
[1] "lp"
>write.lp(x1, filename, type = "lp",use.names = c(TRUE, TRUE))
Error in write.lp(x1, filename, type = "lp", use.names = c(TRUE, TRUE)) :
the lp argument does not appear to be a valid linear program record
I don't think you can mix these two packages (lpSolveAPI doesn't import or depend on lpSolve). Consider a simple LP in lpSolve:
library(lpSolve)
costs <- c(1, 2)
mat <- diag(2)
dirs <- rep(">=", 2)
rhs <- c(1, 1)
x1 = lp("min", costs, mat, dirs, rhs)
x1
# Success: the objective function is 3
Based on the project website for lpSolveAPI, you can do the same thing with something like:
library(lpSolveAPI)
x2 = make.lp(0, ncol(mat))
set.objfn(x2, costs)
for (idx in 1:nrow(mat)) {
    add.constraint(x2, mat[idx, ], dirs[idx], rhs[idx])
}
Now, we can solve and observe the solution:
x2
# Model name:
# C1 C2
# Minimize 1 2
# R1 1 0 >= 1
# R2 0 1 >= 1
# Kind Std Std
# Type Real Real
# Upper Inf Inf
# Lower 0 0
solve(x2)
# [1] 0
get.objective(x2)
# [1] 3
get.variables(x2)
# [1] 1 1
Getting back to the question, we can now write it out to a file:
write.lp(x2, "myfile.lp")
Here's the contents of the file:
/* Objective function */
min: +C1 +2 C2;
/* Constraints */
R1: +C1 >= 1;
R2: +C2 >= 1;
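As a quick sanity check, lpSolveAPI can also read the file back in with read.lp; a brief sketch:
# Read the model back and re-solve it to confirm the round trip
x3 <- read.lp("myfile.lp", type = "lp")
solve(x3)          # 0 indicates success
get.objective(x3)  # 3, as before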

Optimization with Constraints

I am working with the output from a model in which there are parameter estimates that may not follow a priori expectations. I would like to write a function that forces these utility estimates back in line with those expectations. To do this, the function should minimize the sum of the squared deviance between the starting values and the new estimates. Since we have a priori expectations, the optimization should be subject to the following constraints:
B0 < B1
B1 < B2
...
Bj < Bj+1
For example, the raw parameter estimates below are flip-flopped for B2 and B3. The columns Delta and Delta^2 show the deviance between the original parameter estimate and the new coefficient. I am trying to minimize the column Delta^2. I've coded this up in Excel and shown how Excel's Solver would optimize this problem given the set of constraints:
Beta  BetaRaw  Delta  Delta^2  BetaNew
B0    1.2       0.0     0.00   1.2
B1    1.3       0.0     0.00   1.3
B2    1.6      -0.2     0.04   1.4
B3    1.4       0.0     0.00   1.4
B4    2.2       0.0     0.00   2.2
After reading through ?optim and ?constrOptim, I'm not able to grok how to set this up in R. I'm sure I'm just being a bit dense, but I could use some pointers in the right direction!
3/24/2012 - Added bounty since I'm not smart enough to translate the first answer.
Here's some R code that should be on the right path. Assuming that the betas start with:
betas <- c(1.2,1.3,1.6,1.4,2.2)
I want to minimize the following function such that b0 <= b1 <= b2 <= b3 <= b4
f <- function(x) {
    x1 <- x[1]
    x2 <- x[2]
    x3 <- x[3]
    x4 <- x[4]
    x5 <- x[5]
    loss <- (x1 - betas[1]) ^ 2 +
            (x2 - betas[2]) ^ 2 +
            (x3 - betas[3]) ^ 2 +
            (x4 - betas[4]) ^ 2 +
            (x5 - betas[5]) ^ 2
    return(loss)
}
To show that the function works, the loss should be zero if we pass the original betas in:
> f(betas)
[1] 0
And relatively large with some random inputs:
> set.seed(42)
> f(rnorm(5))
[1] 8.849329
And minimized at the values I was able to calculate in Excel:
> f(c(1.2,1.3,1.4,1.4,2.2))
[1] 0.04
1.
Since the objective is quadratic and the constraints linear,
you can use solve.QP.
It finds the b that minimizes
(1/2) * t(b) %*% Dmat %*% b - t(dvec) %*% b
under the constraints
t(Amat) %*% b >= bvec.
Here, we want b that minimizes
sum( (b-betas)^2 ) = sum(b^2) - 2 * sum(b*betas) + sum(betas^2)
                   = t(b) %*% b - 2 * t(b) %*% betas + sum(betas^2).
Since the last term, sum(betas^2), is constant, we can drop it,
and we can set
Dmat = diag(n)
dvec = betas.
The constraints are
b[1] <= b[2]
b[2] <= b[3]
...
b[n-1] <= b[n]
i.e.,
-b[1] + b[2] >= 0
- b[2] + b[3] >= 0
...
- b[n-1] + b[n] >= 0
so that t(Amat) is
[ -1 1 ]
[ -1 1 ]
[ -1 1 ]
[ ... ]
[ -1 1 ]
and bvec is zero.
This leads to the following code.
# Sample data
betas <- c(1.2, 1.3, 1.6, 1.4, 2.2)
# Optimization
n <- length(betas)
Dmat <- diag(n)
dvec <- betas
Amat <- matrix(0,nr=n,nc=n-1)
Amat[cbind(1:(n-1), 1:(n-1))] <- -1
Amat[cbind(2:n, 1:(n-1))] <- 1
t(Amat) # Check that it looks as it should
bvec <- rep(0,n-1)
library(quadprog)
r <- solve.QP(Dmat, dvec, Amat, bvec)
# Check the result, graphically
plot(betas)
points(r$solution, pch=16)
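For a numeric check alongside the plot, the solution can be printed directly. Note the QP optimum pools the out-of-order pair (1.6, 1.4) to their mean, for a total squared deviance of 0.02, slightly better than the 0.04 of the Excel solution in the question:
round(r$solution, 2)
# [1] 1.2 1.3 1.5 1.5 2.2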
2.
You can use constrOptim in the same way (the objective function can be arbitrary, but the constraints have to be linear).
3.
More generally, you can use optim if you reparametrize the problem
into an unconstrained optimization problem, for instance
b[1] = exp(x[1])
b[2] = b[1] + exp(x[2])
...
b[n] = b[n-1] + exp(x[n-1]).
There are a few examples here or there.
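A minimal sketch of that reparametrization, using the sample betas from above (the cumulative sum of exponentials makes b increasing by construction; the starting point x0 is an assumption for illustration):
betas <- c(1.2, 1.3, 1.6, 1.4, 2.2)

# b[1] = exp(x[1]), b[k] = b[k-1] + exp(x[k]): increasing by construction
to_b <- function(x) cumsum(exp(x))
loss <- function(x) sum((to_b(x) - betas)^2)

# Start near betas; floor the differences so the log is defined
x0 <- log(pmax(c(betas[1], diff(betas)), 1e-6))
fit <- optim(x0, loss, method = "BFGS")
round(to_b(fit$par), 3)  # approaches 1.2 1.3 1.5 1.5 2.2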
Alright, this is starting to take form, but it still has some bugs. Based on the conversation in chat with @Joran, it seems I can include a conditional that will set the loss function to an arbitrarily large value if the values are not in order. This seems to work IF the discrepancy occurs between the first two coefficients, but not thereafter. I'm having a hard time parsing out why that would be the case.
Function to minimize:
f <- function(x, x0) {
    x1 <- x[1]
    x2 <- x[2]
    x3 <- x[3]
    x4 <- x[4]
    x5 <- x[5]
    loss <- (x1 - x0[1]) ^ 2 +
            (x2 - x0[2]) ^ 2 +
            (x3 - x0[3]) ^ 2 +
            (x4 - x0[4]) ^ 2 +
            (x5 - x0[5]) ^ 2
    # Make sure the coefficients are in order
    if (any(diff(c(x1, x2, x3, x4, x5)) > 0)) loss <- 10000000
    return(loss)
}
Working example (sort of, it seems the loss would be minimized if b0 = 1.24?):
> betas <- c(1.22, 1.24, 1.18, 1.12, 1.10)
> optim(betas, f, x0 = betas)$par
[1] 1.282 1.240 1.180 1.120 1.100
Non-working example (note that the third element is still larger than the second):
> betas <- c(1.20, 1.15, 1.18, 1.12, 1.10)
> optim(betas, f, x0 = betas)$par
[1] 1.20 1.15 1.18 1.12 1.10
