I am struggling to formulate the following subtour elimination constraint for a TSP-like problem in Pyomo, given a graph G(V,A) where node 1 is the depot: for every subset S of V that contains the depot and every node h outside S,
sum_{i in S, j not in S} x_ij >= y_h
where x_ij and y_h are binary variables that I have previously defined.
First, I created a dictionary of all possible subsets S that always contain node 1: subsets_s.
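For reference, subsets_s is built along these lines (a simplified sketch; only the 'nodes_subset' entry matters below, and the node set here is illustrative):
from itertools import combinations

V = [1, 2, 3, 4, 5]  # illustrative node set; node 1 is the depot
others = [v for v in V if v != 1]
subsets_s = {}
for r in range(len(others) + 1):
    for combo in combinations(others, r):
        # every subset always contains the depot, node 1
        subsets_s[len(subsets_s)] = {'nodes_subset': {1, *combo}}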
Then, I have been trying with something like this, but I am running into errors:
model1.con3 = ConstraintList()
for h in model1.V:
    if h is not 1:
        for i in model1.S:
            if h not in subsets_s[i]['nodes_subset']:
                S = subsets_s[i]['nodes_subset']
                for v in S:
                    print(v)
                    model1.con3.add(sum(sum(model1.x[v,j]) for j in model1.V if j not in S) >= model1.y[h])
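For comparison, I believe the constraint for a single subset S and node h should reduce to something like this (untested):
model1.con3.add(
    sum(model1.x[v, j] for v in S for j in model1.V if j not in S) >= model1.y[h]
)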
Do you have any suggestions?
Thank you
Here is my question: from an HMM model I created, I want to create a network using the transition matrix (with the probability of going from one state to another) as an adjacency matrix. There are no 0s in this matrix, not even on the diagonal, because you can go from state 1 to state 1 again.
How can I be sure that the network created is actually using the weights (the values in the transition matrix) to create communities?
I'm using the following code:
import community
import networkx as nx
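# tm is the HMM transition matrix (a square numpy array of transition probabilities)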
G = nx.from_numpy_matrix(tm, parallel_edges=False, create_using=None)
# Relabel nodes
G = nx.relabel_nodes(G, {i: f"node_{i}" for i in G.nodes})
# Compute partition
partition = community.best_partition(G)
# Get a set of the communities
communities = set(partition.values())
# Create a dictionary mapping community number to nodes within that community
communities_dict = {c: [k for k, v in partition.items() if v == c] for c in communities}
communities_dict
and the output makes sense: 5 communities (0 to 4) grouping 25 nodes (from 0 to 24)
{0: ['node_0', 'node_1', 'node_14', 'node_17'],
1: ['node_5', 'node_10', 'node_11', 'node_12'],
2: ['node_2', 'node_20', 'node_24'],
3: ['node_3', 'node_8', 'node_9', 'node_15', 'node_18', 'node_22'],
4: ['node_4', 'node_6', 'node_7', 'node_13', 'node_16', 'node_19', 'node_21', 'node_23']}
I noticed a post here on stack overflow in which they used
partition = community.best_partition(G, weight='weights')
I tried to implement it, and the results are awful: each node is a community (for a total of 25 communities on 25 nodes).
My question is: am I doing this correctly with the first code? Or is the correct one the second, which returns bad communities?
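For reference, this is how I checked that the edges actually carry the matrix values (using the G built above; as far as I know, from_numpy_matrix stores each entry under the edge attribute 'weight'):
print(list(G.edges(data=True))[:3])             # e.g. [('node_0', 'node_1', {'weight': 0.12}), ...]
weights = nx.get_edge_attributes(G, 'weight')   # dict mapping (u, v) -> weight
print(len(weights), "weighted edges")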
I'm trying to maximize the portfolio return subject to 5 constraints:
1.- a certain level of portfolio risk
2.- the same as above but with opposite sign (I need the risk to be exactly that number)
3.- the sum of weights has to be 1
4.- all the weights must be greater than or equal to zero
5.- all the weights must be at most one
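In other words, what I am trying to write is: maximize r'w subject to w'Q w = (target risk), sum(w) = 1 and 0 <= w <= 1, where r is the vector of expected returns, Q the covariance matrix, and w the weights.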
I'm using the optiSolve package because I didn't find any other package that allows me to write this problem (or at least none that I understood how to use).
I have three big problems here: first, the resulting weights vector sums to more than 1; second, I can't declare t(w) %*% varcov_matrix %*% w == 0 as a quadratic constraint because quadcon only allows "<="; and finally, I don't know how to add a constraint to get only positive weights.
vector_de_retornos <- rnorm(5)
matriz_de_varcov <- matrix(rnorm(25), ncol = 5)
library(optiSolve)
restriccion1 <- quadcon(Q = matriz_de_varcov, dir = "<=", val = 0.04237972)
restriccion1_neg <- quadcon(Q = -matriz_de_varcov, dir = "<=",
                            val = -mean(limite_inf, limite_sup))
restriccion2 <- lincon(t(vector_de_retornos),
                       d = rep(0, nrow(t(vector_de_retornos))),
                       dir = rep("==", nrow(t(vector_de_retornos))),
                       val = rep(1, nrow(t(vector_de_retornos))),
                       id = 1:ncol(t(vector_de_retornos)),
                       name = nrow(t(vector_de_retornos)))
restriccion_nonnegativa <- lbcon(rep(0,length(vector_de_retornos)))
restriccion_positiva <- ubcon(rep(1,length(vector_de_retornos)))
funcion_lineal <- linfun(vector_de_retornos, name = "lin.fun")
funcion_obj <- cop(funcion_lineal, max = T, ub = restriccion_positiva,
                   lc = restriccion2, lb = restriccion_nonnegativa,
                   restriccion1, restriccion1_neg)
porfavor_funciona <- solvecop(funcion_obj, solver = "alabama")
> porfavor_funciona$x
1 2 3 4 5
-3.243313e-09 -4.709673e-09 9.741379e-01 3.689040e-01 -1.685290e-09
> sum(porfavor_funciona$x)
[1] 1.343042
Does anyone know how to solve this maximization problem with all the constraints mentioned above, or can you tell me what I'm doing wrong? I'd really appreciate it, because the result seems like it's not taking the constraints into account. Thanks!
Your restriccion2 makes the weighted sum of x equal to 1. If you also want to ensure that the regular sum of x is 1, you can modify the constraint as follows:
restriccion2 <- lincon(rbind(t(vector_de_retornos),
                             # make a second row of coefficients in the A matrix
                             t(rep(1, length(vector_de_retornos)))),
                       d = rep(0, 2),       # the scalar value for both constraints is 0
                       dir = rep('==', 2),  # the direction for both constraints is '=='
                       val = rep(1, 2),     # the rhs value for both constraints is 1
                       id = 1:ncol(t(vector_de_retornos)), # the number of columns is the same as before
                       name = 1:2)
If you only want the regular sum to be 1 (and not the weighted sum), replace the first parameter of the lincon function, as you've defined it, with t(rep(1, length(vector_de_retornos))); that will just constrain the regular sum of x to be 1.
To make an equality constraint using only inequalities, you need the same constraint twice but with opposite signs on the coefficients and right-hand-side values (for example: 2x <= 4 and -2x <= -4 combine to make the constraint 2x == 4). In your edit above, you provide a different value to the val parameter, so these two constraints won't combine to make the equality constraint unless they match except for opposite signs, as below.
restriccion1_neg <- quadcon(Q = -matriz_de_varcov, dir = "<=", val = -0.04237972)
I'm not certain because I can't find precision information in the package documentation, but those "negative" values in the x vector are probably due to rounding. They are so small that they are effectively 0, so I think the non-negativity constraint is functioning properly.
restriccion_nonnegativa <- lbcon(rep(0,length(vector_de_retornos)))
A constraint of the form
x'Qx = a
is non-convex. (More generally: any nonlinear equality constraint is non-convex.) Non-convex problems are much more difficult to solve than convex ones and require specialized global solvers. For convex problems there are quite a few solvers available; this is not the case for non-convex problems. Most portfolio models are formulated as convex QP problems (quadratic programming, i.e. the risk -- the quadratic term -- is in the objective) or convex QCP/SOCP problems (quadratic terms in the constraints, but in a convex fashion). So the constraint
x'Qx <= a
is easy (convex), as long as Q is positive semidefinite. Rewriting x'Qx = a as
x'Qx <= a
-x'Qx <= -a
unfortunately does not make the non-convexity go away, as -Q is not PSD. If we are maximizing return, we usually only use x'Qx <= a to limit the risk and forget about the >= part. Even more popular is to put both the return and the risk in the objective (that is the standard mean-variance portfolio model).
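For reference, that standard formulation is simply: maximize r'x - λ·x'Qx subject to sum(x) = 1 and x >= 0 (with r the return vector and λ >= 0 a risk-aversion parameter), which stays convex as long as Q is PSD.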
A possible solver for solving non-convex quadratic problems under R is Gurobi.
I have written R code to solve the following equations jointly. The equations are given in closed form, but solving them jointly requires a numerical (iterative) procedure.
I further divided the numerator and denominator of (B) by N to get arithmetic means.
Here is my code:
y = cbind(Sta, Zta, Ste, Zte)           # combine the variables
St = as.matrix(y[, c(1, 3)])
Stm = c(mean(St[, 1]), mean(St[, 2]))   # Arithmetic means of St's
Zt = as.matrix(y[, c(2, 4)])
Ztm = c(mean(Zt[, 1]), mean(Zt[, 2]))   # Arithmetic means of Zt's
theta = c(-20, -20)                     # starting values for thetas
tol = c(10^-4, 10^-4)
err = c(0, 0)
epscon = -0.1
while (abs(err) > tol | epscon < 0) {
  ### A
  eps = ((mean(y[,2]^2)) + mean(y[,4]^2)) / (-mean(y[,1]*y[,2]) + theta[1]*mean(y[,2]) - mean(y[,3]*y[,4]) + theta[2]*mean(y[,4]))
  ### B
  thetan = Stm + (1/eps)*Ztm
  err = thetan - theta
  theta = thetan
  epscon = 1 - eps
  print(c(eps, theta))
}
The iteration does not stop because the second condition of the while loop is not met: the solution is a positive epsilon, and I would like to get a negative epsilon. This, I guess, requires a grid search or a range of starting values for the thetas.
Can anyone please help code this process differently and more efficiently? Or help correct my code if there are flaws in it.
Thank you
If I am right, using linearity your equations have the form
ΘA = a + b / ε
ΘB = c + d / ε
1/ε = e ΘA + f ΘB + g
This is an easy 3x3 linear system.
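Writing u = 1/ε, the system becomes linear in (ΘA, ΘB, u):
ΘA - b·u = a
ΘB - d·u = c
-e·ΘA - f·ΘB + u = g
and can be solved directly (for example with solve() in R), recovering ε = 1/u at the end.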
I'm trying to tackle a nonlinear optimization problem where the objective function is non-linear and the constraints are linear. I read a bit about the ROI package in R and decided to use it. However, I am facing a problem while solving the optimization problem.
I am essentially trying to minimize the area under a supply-demand curve. The equations for the supply and demand curves are defined in the code below:
Objective function: minimize (Integral of supply curve + integral of demand curve),
subject to constraints q greater than or equal to 34155 (stored in a variable called ICR),
q greater than or equal to 0
and q less than or equal to 40000.
I have tried to run this through the ROI package in RStudio and I keep getting an error telling me that there is no solver to be found.
library(tidyverse)
library(ROI)
library(rSymPy)
library(mosaicCalc)
# Initializing parameters for demand curve
A1 <- 6190735.2198302800
B1 <- -1222739.9618776600
C1 <- 103427.9556133250
D1 <- -4857.0627045073
E1 <- 136.7660814828
# Initializing parameters for Supply Curve
S1 <- -1.152
S2 <- 0.002
S3 <- a-9.037e-09
S4 <- 2.082e-13
S5 <- -1.64e-18
ICR <- 34155
demand_curve_integral <- antiD(A1 + B1*q + C1*(q^2)+ D1*(q^3) + E1*(q^4) ~q)
supply_curve_integral <- antiD(S1 + S2*(q) + S3*(q^2) + S4*(q^3) + S5*(q^4)~q)
# Setting up the objective function
obj_func <- function(q){ (18.081*demand_curve_integral(q))+supply_curve_integral(q)}
# Setting up the optimization Problem
lp <- OP(objective = F_objective(obj_func, n = 1L),
         constraints = L_constraint(L = matrix(c(1, 1, 1), nrow = 3),
                                    dir = c(">=", ">=", "<="),
                                    rhs = c(ICR, 0, 40000, 1)),
         maximum = FALSE)
sol <- ROI_solve(lp)
This is the error that I keep getting in RStudio:
Error in ROI_solve(lp) : no solver found for this signature:
objective: F
constraints: L
bounds: V
cones: X
maximum: FALSE
C: TRUE
I: FALSE
B: FALSE
What should I do to rectify this error?
In general you could use ROI.plugin.alabama or ROI.plugin.nloptr for this optimization problem.
But I looked at the problem and this raised several questions.
a is not defined in the code.
You state that q has length 1 and add 3 linear constraints. The constraints say
q >= 34155, q >= 0, q <= 40000 (or q <= 1?)
I am not entirely sure, since the length of rhs is 4 but L and dir suggest there are only 3 linear constraints.
What should the constraint look like?
34155 <= q <= 40000?
Then you could specify the constraint as bounds and use ROI.plugin.optimx, or, since you have a one-dimensional optimization problem, just use optimize
from the stats package: https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optimize.html
I haven't run NLP using ROI. But you have to install an ROI solver plug-in and then load the library in your code. The current solver plug-ins are:
library(ROI.plugin.glpk)
library(ROI.plugin.lpsolve)
library(ROI.plugin.neos)
library(ROI.plugin.symphony)
library(ROI.plugin.cplex)
Neos provides access to NLP solvers but I don't know how to pass solver parameters via an ROI plug-in function call.
https://neos-guide.org/content/nonlinear-programming
I'm new to OpenMDAO and I'm still learning how to formulate problems.
For a simple example, let's say I have 3 input variables with the given bounds:
1 <= x <= 10
0 <= y <= 10
1 <= z <= 10
and I have 4 outputs, defined as:
f1 = x * y
f2 = 2 * z
g1 = x + y - 1
g2 = z
My goal is to minimize f1 * g1 while enforcing the constraints f1 = f2 and g1 = g2. For example, one feasible solution is x=3, y=4, z=6 (no idea if this is optimal).
For this simple problem, you can probably just feed the output equality constraints to the driver. However, for my actual problem it's hard to find an initial starting point that satisfies all the constraints, and as a result the optimizer fails to do anything. I figure maybe I could define y and z as states in an implicit component and have a nonlinear solver figure out the right values of y and z given x, then feed x to the optimization driver.
Is this a possible approach? If so, what would the implicit component look like in this case? I looked at the Sellar problem tutorial but I wasn't able to translate it to this case.
You could create an implicit component if you want. In that case, you would define an apply_nonlinear method in your component that computes the residuals. That is done for the Sellar problem here.
In your case, since you have a set of 2 residual equations that both depend on the state variables, I suggest you create a single array state variable of length 2; call it foo (I used a new variable to avoid any confusion, but name it whatever you want!). Then you will define two residuals, one for each element of the residual array of the new state variable.
Something like:
resids['foo'][0] = params['x'] * unknowns['foo'][0] - 2 * unknowns['foo'][1]
resids['foo'][1] = params['x'] + unknowns['foo'][0] - 1 - unknowns['foo'][1]
If you wanted to keep the state variable names separate you could, and it will still work. You'll just have to arbitrarily assign one residual equation to one variable and one to the other.
Then the only thing left is to add a nonlinear solver to the group containing your implicit component and it should work. If you choose to use a Newton solver, you'll either need to set fd_options['force_fd'] = True or define derivatives of your residuals with respect to all params and state variables.
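A rough, untested sketch of what that component could look like (assuming the OpenMDAO 1.x API implied by the params/unknowns/resids names above; the class name and initial values are just illustrative):
import numpy as np
from openmdao.api import Component

class CoupledStates(Component):
    """Implicit component whose states foo[0] and foo[1] play the roles of y and z."""

    def __init__(self):
        super(CoupledStates, self).__init__()
        self.add_param('x', val=1.0)
        self.add_state('foo', val=np.ones(2))

    def apply_nonlinear(self, params, unknowns, resids):
        # residual 1 enforces f1 = f2, i.e. x*y - 2*z = 0
        resids['foo'][0] = params['x'] * unknowns['foo'][0] - 2.0 * unknowns['foo'][1]
        # residual 2 enforces g1 = g2, i.e. x + y - 1 - z = 0
        resids['foo'][1] = params['x'] + unknowns['foo'][0] - 1.0 - unknowns['foo'][1]
A Newton (or other nonlinear) solver on the enclosing group would then drive both residuals to zero for whatever value of x the optimizer passes in.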